Schema of the issue records below (column name, type, and observed range of values):

| Column | Type | Observed range / values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 48–51 |
| id | int64 | 600M–3.67B |
| node_id | string | lengths 18–24 |
| number | int64 | 2–7.88k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 distinct values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| comments | list | lengths 0–30 |
| created_at | timestamp[s] | 2020-04-14 18:18:51 to 2025-11-26 16:16:56 |
| updated_at | timestamp[s] | 2020-04-29 09:23:05 to 2025-11-30 03:52:07 |
| closed_at | timestamp[s] | 2020-04-29 09:23:05 to 2025-11-21 12:31:19 |
| author_association | string | 4 distinct values |
| type | null | |
| active_lock_reason | null | |
| draft | null | |
| pull_request | null | |
| body | string | lengths 0–228k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| state_reason | string | 4 distinct values |
| sub_issues_summary | dict | |
| issue_dependencies_summary | dict | |
| is_pull_request | bool | 1 class |
| closed_at_time_taken | duration[s] | |
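A schema like the one above can be inspected programmatically before downloading any data. This is only a minimal sketch: the repo id `"some-org/github-issues"` is a placeholder, since the actual dataset id hosting these records is not given here.

```python
from datasets import load_dataset_builder

# "some-org/github-issues" is a hypothetical repo id, not the real dataset.
builder = load_dataset_builder("some-org/github-issues")
print(builder.info.features)  # column names and types, as summarised in the table above
print(builder.info.splits)    # split names and example counts
```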
https://api.github.com/repos/huggingface/datasets/issues/4238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4238/comments
https://api.github.com/repos/huggingface/datasets/issues/4238/events
https://github.com/huggingface/datasets/issues/4238
1,217,168,123
I_kwDODunzps5IjIL7
4,238
Dataset caching policy
{ "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loretoparisi", "id": 163333, "login": "loretoparisi", "node_id": "MDQ6VXNlcjE2MzMzMw==", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "repos_url": "https://api.github.com/users/loretoparisi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "type": "User", "url": "https://api.github.com/users/loretoparisi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that your dataset loads without any problem for me:\r\n```python\r\nIn [2]: ds = load_dataset(\"loretoparisi/tatoeba-sentences\", data_files={\"train\": \"train.csv\", \"test\": \"test.csv\"}, delimiter=\"\\t\", column_names=['label', 'text'])\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 8256449\r\n })\r\n test: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 2061204\r\n })\r\n})\r\n``` ", "@albertvillanova thank you, it seems it still does not work using:\r\n\r\n```python\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n download_mode=\"force_redownload\"\r\n)\r\n```\r\n[This](https://colab.research.google.com/drive/1EA6FWo5pHxU8rPHHRn24NlHqRPiOlPTr?usp=sharing) is my notebook!\r\n\r\nThe problem is that the download file's revision for `test.csv` is not correctly parsed\r\n\r\n![Schermata 2022-04-27 alle 18 09 41](https://user-images.githubusercontent.com/163333/165563507-0be53eb6-8f61-49b0-b959-306e59281de3.png)\r\n\r\nIf you download that file `test.csv` from the repo, the line `\\\\N` is not there anymore (it was there at the first file upload).\r\n\r\nMy impression is that the Apache Arrow file is still cached - so server side, despite of enabling a forced download. For what I can see I get those two arrow files, but I cannot grep the bad line (`\\\\N`) since are binary files:\r\n\r\n```\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/csv-test.arrow\r\n!head /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/dataset_info.json\r\n```\r\n", "SOLVED! The problem was the with the file itself, using caching parameter helped indeed.\r\nThanks for helping!" ]
2022-04-27T10:42:11
2022-04-27T16:29:25
2022-04-27T16:28:50
NONE
null
null
null
null
## Describe the bug I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error ``` [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` The file now is cleanup up, but I still get the error. This happens even if I inspect the local cached contents, and cleanup the files locally: ```python from datasets import load_dataset_builder dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences") print(dataset_builder.cache_dir) print(dataset_builder.info.features) print(dataset_builder.info.splits) ``` ``` Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519 None None ``` and removing files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`. Is there any remote file caching policy in place? If so, is it possibile to programmatically disable it? Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact I download locally the file from raw link, the file is up-to-date; but If I use it within `datasets` as shown above, it gives to me always the first revision of the file, not the last. Thank you. 
## Steps to reproduce the bug ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset( "loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'], ) # You can make this part faster with num_proc=<some int> sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) sentences = sentences.shuffle() ``` ## Expected results Properly tokenize dataset file `test.csv` without issues. ## Actual results Specify the actual results or traceback. 
``` Downloading data files: 100% 2/2 [00:16<00:00, 7.34s/it] Downloading data: 100% 391M/391M [00:12<00:00, 36.6MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 40.0MB/s] Extracting data files: 100% 2/2 [00:00<00:00, 47.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 2/2 [00:00<00:00, 25.94it/s] 11% 942339/8256449 [01:55<13:11, 9245.85ex/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-3-6a9867fad8d6>](https://localhost:8080/#) in <module>() 12 ) 13 # You can make this part faster with num_proc=<some int> ---> 14 sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) 15 sentences = sentences.shuffle() 10 frames [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 - ``` ``` - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loretoparisi", "id": 163333, "login": "loretoparisi", "node_id": "MDQ6VXNlcjE2MzMzMw==", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "repos_url": "https://api.github.com/users/loretoparisi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "type": "User", "url": "https://api.github.com/users/loretoparisi", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4238/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4238/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
5:46:39
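A minimal sketch of the workaround proposed in the thread above (issue 4238): pass `download_mode="force_redownload"` so that `load_dataset` ignores previously downloaded and cached CSV files. The repo id, file names, and CSV options are taken from the report.

```python
from datasets import load_dataset

# Force a fresh download instead of reusing files cached under
# ~/.cache/huggingface/datasets (the behaviour discussed in the issue above).
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
    download_mode="force_redownload",
)
print(sentences)
```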
https://api.github.com/repos/huggingface/datasets/issues/4237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4237/comments
https://api.github.com/repos/huggingface/datasets/issues/4237/events
https://github.com/huggingface/datasets/issues/4237
1,217,121,044
I_kwDODunzps5Ii8sU
4,237
Common Voice 8 doesn't show datasets viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
[ "Thanks for reporting. I understand it's an error in the dataset script. To reproduce:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> split_names = ds.get_dataset_split_names(\"mozilla-foundation/common_voice_8_0\", use_auth_token=\"**********\")\r\nDownloading builder script: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.9k/10.9k [00:00<00:00, 10.9MB/s]\r\nDownloading extra modules: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 3.36MB/s]\r\nDownloading extra modules: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 53.1k/53.1k [00:00<00:00, 650kB/s]\r\nNo config specified, defaulting to: common_voice/en\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_8_0/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3/common_voice_8_0.py\", line 153, in _split_generators\r\n self._log_download(self.config.name, bundle_version, hf_auth_token)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_8_0/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3/common_voice_8_0.py\", line 139, in _log_download\r\n email = HfApi().whoami(auth_token)[\"email\"]\r\nKeyError: 'email'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```", "Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.\r\n\r\nUnfortunately I'm not able to reproduce the error.\r\n\r\nI think the error has to do with authentication with `huggingface_hub`, because the exception is thrown from these code lines: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/blob/main/common_voice_8_0.py#L137-L139\r\n```python\r\nfrom huggingface_hub import HfApi, HfFolder\r\n\r\nif isinstance(auth_token, bool):\r\n email = HfApi().whoami(auth_token)\r\nemail = HfApi().whoami(auth_token)[\"email\"]\r\n```\r\n\r\nCould you please verify the previous code with the `auth_token` you pass to `load_dataset(..., use_auth_token=auth_token,...`?", "OK, thanks for digging a bit into it. 
Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!\r\n\r\n```python\r\n>>> from huggingface_hub import HfApi, HfFolder\r\n>>> auth_token = \"hf_app_******\"\r\n>>> t = HfApi().whoami(auth_token)\r\n>>> t\r\n{'type': 'app', 'name': 'dataset-preview-backend'}\r\n>>> t[\"email\"]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nKeyError: 'email'\r\n```\r\n\r\nNote also that the doc (https://huggingface.co/docs/huggingface_hub/package_reference/hf_api#huggingface_hub.HfApi.whoami) does not state that `whoami` should return an `email` key.\r\n\r\n@SBrandeis @julien-c: do you think the app token should have an email associated, like the users?", "We can workaround this with\r\n```python\r\nemail = HfApi().whoami(auth_token).get(\"email\", \"system@huggingface.co\")\r\n```\r\nin the common voice scripts", "Hmmm, does this mean that any person who downloads the common voice dataset will be logged as \"system@huggingface.co\"? If so, it would defeat the purpose of sending the user's email to the commonvoice API, right?", "I agree with @severo: we cannot set our system email as default, allowing anybody not authenticated to by-pass the Common Voice usage policy.\r\n\r\nAdditionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.\r\n\r\nCC: @patrickvonplaten @lhoestq @SBrandeis @julien-c ", "Hmm I don't agree here. \r\n\r\nAnybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script it's trivial to tweak the code to not send the \"correct\" email but to just whatever and it would work.\r\n\r\nNote that someone only has visibility on the code after having \"signed\" the access-mechanism so I think we can expect the users to have agreed to not do anything malicious. \r\n\r\nI'm fine with both @lhoestq's solution or we find a way that forces the user to be logged in + being able to load the data for the datasets viewer. Wdyt @lhoestq @severo @albertvillanova ?", "> Additionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.\r\n\r\nYes, I agree we can forget about this @patrickvonplaten. After having had a look at Common Voice website, I've seen they only require sending an email (no auth is inplace on their side, contrary to what I had previously thought). Therefore, currently we impose stronger requirements than them: we require the user having logged in and accepted the access mechanism.\r\n\r\nCurrently the script as it is already requires the user being logged in:\r\n```python\r\nHfApi().whoami(auth_token)\r\n```\r\nthrows an exception if None/invalid auth_token is passed.\r\n\r\nOn the other hand, we should agree on the way to allow the viewer to stream the data.", "The preview is back now, thanks !" ]
2022-04-27T10:05:20
2022-05-10T12:17:05
2022-05-10T12:17:04
CONTRIBUTOR
null
null
null
null
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4237/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4237/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
13 days, 2:11:44
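A short sketch of the failure mode and workaround discussed above (issue 4237): `HfApi().whoami()` returns no `email` key for app tokens, so the dataset script's `whoami(auth_token)["email"]` lookup raises `KeyError`. Using `.get()` with an explicit fallback, as suggested in the thread, avoids the crash; the helper name and fallback handling below are illustrative, not part of any library API.

```python
from huggingface_hub import HfApi

def lookup_email(auth_token, fallback=None):
    # App tokens return e.g. {'type': 'app', 'name': 'dataset-preview-backend'}
    # with no 'email' field, hence the KeyError in the original dataset script.
    info = HfApi().whoami(auth_token)
    return info.get("email", fallback)
```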
https://api.github.com/repos/huggingface/datasets/issues/4235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4235/comments
https://api.github.com/repos/huggingface/datasets/issues/4235/events
https://github.com/huggingface/datasets/issues/4235
1,216,952,640
I_kwDODunzps5IiTlA
4,235
How to load VERY LARGE dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/45160643?v=4", "events_url": "https://api.github.com/users/CaoYiqingT/events{/privacy}", "followers_url": "https://api.github.com/users/CaoYiqingT/followers", "following_url": "https://api.github.com/users/CaoYiqingT/following{/other_user}", "gists_url": "https://api.github.com/users/CaoYiqingT/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CaoYiqingT", "id": 45160643, "login": "CaoYiqingT", "node_id": "MDQ6VXNlcjQ1MTYwNjQz", "organizations_url": "https://api.github.com/users/CaoYiqingT/orgs", "received_events_url": "https://api.github.com/users/CaoYiqingT/received_events", "repos_url": "https://api.github.com/users/CaoYiqingT/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CaoYiqingT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaoYiqingT/subscriptions", "type": "User", "url": "https://api.github.com/users/CaoYiqingT", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "The `Trainer` support `IterableDataset`, not just datasets." ]
2022-04-27T07:50:13
2023-07-25T15:07:57
2023-07-25T15:07:57
NONE
null
null
null
null
### System Info ```shell I am using transformer trainer while meeting the issue. The trainer requests torch.utils.data.Dataset as input, which loads the whole dataset into the memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except using IterDataset, which loads samples of data seperately, and results in low efficiency. I wonder if there are any tricks like Sharding in huggingface trainer. Looking forward to your reply. ``` ### Who can help? Trainer: @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction None ### Expected behavior ```shell I wonder if there are any tricks like fairseq Sharding very large datasets https://fairseq.readthedocs.io/en/latest/getting_started.html. Thanks a lot! ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4235/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4235/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
454 days, 7:17:44
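The reply above (issue 4235) points to `IterableDataset` support in the `Trainer`. One common way to obtain an `IterableDataset` without loading everything into memory is streaming; a minimal sketch, where the dataset name is only a placeholder:

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset that yields examples lazily,
# so the full dataset never has to fit in memory.
stream = load_dataset("imdb", split="train", streaming=True)  # "imdb" is a placeholder
for i, example in enumerate(stream):
    print(example)
    if i == 2:  # just peek at a few examples
        break
```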
https://api.github.com/repos/huggingface/datasets/issues/4230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4230/comments
https://api.github.com/repos/huggingface/datasets/issues/4230/events
https://github.com/huggingface/datasets/issues/4230
1,216,643,661
I_kwDODunzps5IhIJN
4,230
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
{ "avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4", "events_url": "https://api.github.com/users/beyondguo/events{/privacy}", "followers_url": "https://api.github.com/users/beyondguo/followers", "following_url": "https://api.github.com/users/beyondguo/following{/other_user}", "gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/beyondguo", "id": 37113676, "login": "beyondguo", "node_id": "MDQ6VXNlcjM3MTEzNjc2", "organizations_url": "https://api.github.com/users/beyondguo/orgs", "received_events_url": "https://api.github.com/users/beyondguo/received_events", "repos_url": "https://api.github.com/users/beyondguo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions", "type": "User", "url": "https://api.github.com/users/beyondguo", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Thanks for reporting @beyondguo.\r\n\r\nIndeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip\r\nAnd that URL only contains the English version.", "The German data requires payment\r\n\r\nThe [original task page](https://www.clips.uantwerpen.be/conll2003/ner/) states \"The German data is a collection of articles from the Frankfurter Rundschau. The named entities have been annotated by people of the University of Antwerp. Only the annotations are available here. In order to build these data sets you need access to the ECI Multilingual Text Corpus. It can be ordered from the Linguistic Data Consortium (2003 non-member price: US$ 35.00).\"\r\n\r\nInflation since 2003 has also affected LDC's prices, and today the dataset [LDC94T5](https://catalog.ldc.upenn.edu/LDC94T5) is available under license for $75 a copy. The [license](https://catalog.ldc.upenn.edu/license/eci-slash-mci-user-agreement.pdf) includes a non-distribution condition, which is probably why the data has not turned up openly.\r\n\r\nThe ACL hold copyright of this data; I'll mail them and anyone I can find at ECI to see if they'll open this up now. After all, it worked with Microsoft 3DMM, why not here too, after 28 years? :)\r\n", "Closing this issue as we are not allowed to share publicly the German subset." ]
2022-04-27T00:53:52
2023-07-25T15:10:15
2023-07-25T15:10:15
NONE
null
null
null
null
![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png) But on huggingface datasets: ![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png) Where is the German data?
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4230/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4230/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
454 days, 14:16:23
https://api.github.com/repos/huggingface/datasets/issues/4221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4221/comments
https://api.github.com/repos/huggingface/datasets/issues/4221/events
https://github.com/huggingface/datasets/issues/4221
1,215,911,182
I_kwDODunzps5IeVUO
4,221
Dictionary Feature
{ "avatar_url": "https://avatars.githubusercontent.com/u/2944532?v=4", "events_url": "https://api.github.com/users/jordiae/events{/privacy}", "followers_url": "https://api.github.com/users/jordiae/followers", "following_url": "https://api.github.com/users/jordiae/following{/other_user}", "gists_url": "https://api.github.com/users/jordiae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jordiae", "id": 2944532, "login": "jordiae", "node_id": "MDQ6VXNlcjI5NDQ1MzI=", "organizations_url": "https://api.github.com/users/jordiae/orgs", "received_events_url": "https://api.github.com/users/jordiae/received_events", "repos_url": "https://api.github.com/users/jordiae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jordiae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jordiae/subscriptions", "type": "User", "url": "https://api.github.com/users/jordiae", "user_view_type": "public" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n],\r\n```\r\n\r\nFeel free to re-open this issue if that does not work for your use case.", "> Hi @jordiae,\r\n> \r\n> Instead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n> \r\n> ```python\r\n> \"list_of_dict_feature\": [\r\n> {\r\n> \"key1_in_dict\": datasets.Value(\"string\"),\r\n> \"key2_in_dict\": datasets.Value(\"int32\"),\r\n> ...\r\n> }\r\n> ],\r\n> ```\r\n> \r\n> Feel free to re-open this issue if that does not work for your use case.\r\n\r\nThank you" ]
2022-04-26T12:50:18
2022-04-29T14:52:19
2022-04-28T17:04:58
NONE
null
null
null
null
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something? Thank you in advance.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4221/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4221/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 4:14:40
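A self-contained sketch of the pattern recommended in the answer above (issue 4221): describe a list-of-dictionaries column by putting the dict of sub-features inside a plain Python list rather than a `Sequence`. The column names `id` and `annotations` are illustrative; the `key*_in_dict` names come from the reply.

```python
import datasets

features = datasets.Features(
    {
        "id": datasets.Value("int64"),
        # A regular list around the dict describes a list of dictionaries.
        "annotations": [
            {
                "key1_in_dict": datasets.Value("string"),
                "key2_in_dict": datasets.Value("int32"),
            }
        ],
    }
)

ds = datasets.Dataset.from_dict(
    {
        "id": [0, 1],
        "annotations": [
            [{"key1_in_dict": "a", "key2_in_dict": 1}],
            [{"key1_in_dict": "b", "key2_in_dict": 2},
             {"key1_in_dict": "c", "key2_in_dict": 3}],
        ],
    },
    features=features,
)
print(ds.features)
```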
https://api.github.com/repos/huggingface/datasets/issues/4217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4217/comments
https://api.github.com/repos/huggingface/datasets/issues/4217/events
https://github.com/huggingface/datasets/issues/4217
1,214,688,141
I_kwDODunzps5IZquN
4,217
Big_Patent dataset broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/54189843?v=4", "events_url": "https://api.github.com/users/Matthew-Larsen/events{/privacy}", "followers_url": "https://api.github.com/users/Matthew-Larsen/followers", "following_url": "https://api.github.com/users/Matthew-Larsen/following{/other_user}", "gists_url": "https://api.github.com/users/Matthew-Larsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Matthew-Larsen", "id": 54189843, "login": "Matthew-Larsen", "node_id": "MDQ6VXNlcjU0MTg5ODQz", "organizations_url": "https://api.github.com/users/Matthew-Larsen/orgs", "received_events_url": "https://api.github.com/users/Matthew-Larsen/received_events", "repos_url": "https://api.github.com/users/Matthew-Larsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Matthew-Larsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Matthew-Larsen/subscriptions", "type": "User", "url": "https://api.github.com/users/Matthew-Larsen", "user_view_type": "public" }
[ { "color": "8B51EF", "default": false, "description": "", "id": 4069435429, "name": "hosted-on-google-drive", "node_id": "LA_kwDODunzps7yjqgl", "url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.\r\n\r\nSee related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com\r\n\r\nTo quote [@lhoestq](https://github.com/huggingface/datasets/issues/4075#issuecomment-1087362551):\r\n\r\n> PS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.\r\n\r\n", "We should find out if the dataset license allows redistribution and contact the data owners to propose them to host their data on our Hub.", "The data owners have agreed on hosting their data on the Hub." ]
2022-04-25T15:31:45
2022-05-26T06:29:43
2022-05-02T18:21:15
NONE
null
null
null
null
## Dataset viewer issue for '*big_patent*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)* *Unable to view because it says FileNotFound, also cannot download it through the python API* Am I the one who added this dataset ? No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4217/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4217/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
7 days, 2:49:30
https://api.github.com/repos/huggingface/datasets/issues/4211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4211/comments
https://api.github.com/repos/huggingface/datasets/issues/4211/events
https://github.com/huggingface/datasets/issues/4211
1,214,361,837
I_kwDODunzps5IYbDt
4,211
DatasetDict containing Datasets with different features when pushed to hub gets remapped features
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nHowever, for the moment `push_to_hub` does not support specifying different configurations. IMHO, we should implement this.", "Hi @albertvillanova,\r\n\r\nThanks a lot for your reply! I got it now. The strange thing for me was to have it correctly working (i.e., DatasetDict with different features in some datasets) locally and not on the Hub. It would be great to have configuration supported by `push_to_hub`. Personally, this latter functionality allowed me to iterate rather quickly on dataset curation.\r\n\r\nAgain, thanks for your time @albertvillanova!\r\n\r\nBest,\r\nPietro", "Hi! Yes, we should override `DatasetDict.__setitem__` and throw an error if features dictionaries are different. `DatasetDict` is a subclass of `dict`, so `DatasetDict.{update/setdefault}` need to be overridden as well. We could avoid this by subclassing `UserDict`, but then we would get the name collision - `DatasetDict.data` vs. `UserDict.data`. This makes me think we should rename the `data` attribute of `DatasetDict`/`Dataset` for easier dict subclassing (would also simplify https://github.com/huggingface/datasets/pull/3997) and to follow good Python practices. Another option is to have a custom `UserDict` class in `py_utils`, but it can be hard to keep this class consistent with the built-in `UserDict`. \r\n\r\n@albertvillanova @lhoestq wdyt?", "I would keep things simple and keep subclassing dict. Regarding the features check, I guess this can be done only for `push_to_hub` right ? It is the only function right now that requires the underlying datasets to be splits (e.g. train/test) and have the same features.\r\n\r\nNote that later you will be able to push datasets with different features as different dataset **configurations** (similarly to the [GLUE subsets](https://huggingface.co/datasets/glue) for example). We will work on this soon", "Hi @lhoestq,\r\n\r\nReturning to this thread to ask whether the possibility to create `DatasetDict` with different configurations will be supported in the future.\r\n\r\nBest,\r\nPietro", "DatasetDict is likely to always require the datasets to have the same columns and types, while different configurations may have different columns and types.\r\n\r\nWhy would you like to see that ?\r\nIf it's related to push_to_hub, we plan to allow pushing several configs, but not using DatasetDict", "Hi @lhoestq and @pietrolesci,\r\n\r\nI have been curious about this question as well. I don't have experience working with different configurations, but I can give a bit more detail on the work flow that I have been using with `Dataset_dict`.\r\n\r\nAs @pietrolesci mentions, I have been using `push_to_hub` to quickly iterate on dataset curation for different ML experiments - locally I create a set of dataset splits e.g. `train/val/test/inference`, then convert them to `HF_Datasets` and finally a to `Dataset_Dict` to `push_to_hub`. Where I have run into issues is when I want to include different metadata for different splits. For example, I have situations where I only have meta-data for one of the splits (e.g. 
test) or situations where I am working with `inference` data that does not have labels. Currently I use a rather hacky work around by adding \"dummy\" columns for missing columns to avoid the error:\r\n\r\n```\r\nValueError: All datasets in `DatasetDict` should have the same features\r\n```\r\n\r\nI am curious why `DatasetDict` will likely not support this functionality? I don't know much about working with different configurations, but allowing for different columns between datasets / splits would be a very helpful use-case for me. Are there any docs for using different configuration OR a more info about incorporating it with `push_to_hub`.\r\n\r\nBest wishes,\r\nJonathan\r\n\r\n", "+1", "> I am curious why DatasetDict will likely not support this functionality?\r\n\r\nThere's a possibility we may merge the Dataset and DatasetDict classes. The DatasetDict purpose was to define a way to get the train/test splits of a dataset.\r\n\r\nsee the discussions at https://github.com/huggingface/datasets/issues/5189\r\n\r\n> Are there any docs for using different configuration OR a more info about incorporating it with push_to_hub.\r\n\r\nThere's a PR open to allow to upload a dataset with a certain configuration name. Then later you can reload this specific configuration using `load_dataset(ds_name, config_name)`\r\n\r\nsee the PR at https://github.com/huggingface/datasets/pull/5213", "Hi, regarding the following information:\r\n\r\n> Please note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n> \r\n> To handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nAltough this is often implied (such as how else would `DatasetDict` be able to process multiple splits in the same way?), I would expect it to be written somewhere in the docs plainly and maybe even in bold. Also I would expect to see it in multiple places such as:\r\n\r\n- in docstring of `DatasetDict`\r\n- in nlp/image/audio guides on how to create a dataset\r\n- [in conceptual guide on how to create a loading script](https://huggingface.co/docs/datasets/main/en/about_dataset_load)\r\n\r\n\r\nI think this addition would benefit the docs, especially when you guide a newbie (such as me) through the process of creating a dataset. As I said, you somehow suspect that this is in fact the case, but without reading it in the docs you cannot be sure." ]
2022-04-25T11:22:54
2023-04-06T19:25:50
2022-05-20T15:15:30
NONE
null
null
null
null
Hi there, I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same. Dataset and code to reproduce available [here](https://huggingface.co/datasets/pietrolesci/robust_nli). In short: I have 3 feature mapping ```python Tri_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), } ) Ent_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]), } ) Con_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]), } ) ``` Then I create different datasets ```python dataset_splits = {} for split in df["split"].unique(): print(split) df_split = df.loc[df["split"] == split].copy() if split in Tri_dataset: df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) ds = Dataset.from_pandas(df_split, features=Tri_features) elif split in Ent_bin_dataset: df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1}) ds = Dataset.from_pandas(df_split, features=Ent_features) elif split in Con_bin_dataset: df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1}) ds = Dataset.from_pandas(df_split, features=Con_features) else: print("ERROR:", split) dataset_splits[split] = ds datasets = DatasetDict(dataset_splits) ``` I then push to hub ```python datasets.push_to_hub("pietrolesci/robust_nli", token="<token>") ``` Finally, I load it from the hub ```python datasets_loaded_from_hub = load_dataset("pietrolesci/robust_nli") ``` And I get that ```python datasets["LI_TS"].features != datasets_loaded_from_hub["LI_TS"].features ``` since ```python "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]) ``` gets remapped to ```python "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4211/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4211/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
25 days, 3:52:36
https://api.github.com/repos/huggingface/datasets/issues/4210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4210/comments
https://api.github.com/repos/huggingface/datasets/issues/4210/events
https://github.com/huggingface/datasets/issues/4210
1,214,089,130
I_kwDODunzps5IXYeq
4,210
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
{ "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loretoparisi", "id": 163333, "login": "loretoparisi", "node_id": "MDQ6VXNlcjE2MzMzMw==", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "repos_url": "https://api.github.com/users/loretoparisi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "type": "User", "url": "https://api.github.com/users/loretoparisi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\",\"vie\",\"nld\",\"epo\",\"por\",\"tur\",\"heb\",\"hun\",\"ell\",\"ind\",\"ara\",\"arz\",\"fin\",\"bul\",\"yue\",\"swe\",\"ukr\",\"bel\",\"que\",\"ces\",\"swh\",\"nno\",\"wuu\",\"nob\",\"zsm\",\"est\",\"kat\",\"pol\",\"lat\",\"urd\",\"sqi\",\"isl\",\"fry\",\"afr\",\"ron\",\"fao\",\"san\",\"bre\",\"tat\",\"yid\",\"uig\",\"uzb\",\"srp\",\"qya\",\"dan\",\"pes\",\"slk\",\"eus\",\"cycl\",\"acm\",\"tgl\",\"lvs\",\"kaz\",\"hye\",\"hin\",\"lit\",\"ben\",\"cat\",\"bos\",\"hrv\",\"tha\",\"orv\",\"cha\",\"mon\",\"lzh\",\"scn\",\"gle\",\"mkd\",\"slv\",\"frm\",\"glg\",\"vol\",\"ain\",\"jbo\",\"tok\",\"ina\",\"nds\",\"mal\",\"tlh\",\"roh\",\"ltz\",\"oss\",\"ido\",\"gla\",\"mlt\",\"sco\",\"ast\",\"jav\",\"oci\",\"ile\",\"ota\",\"xal\",\"tel\",\"sjn\",\"nov\",\"khm\",\"tpi\",\"ang\",\"aze\",\"tgk\",\"tuk\",\"chv\",\"hsb\",\"dsb\",\"bod\",\"sme\",\"cym\",\"mri\",\"ksh\",\"kmr\",\"ewe\",\"kab\",\"ber\",\"tpw\",\"udm\",\"lld\",\"pms\",\"lad\",\"grn\",\"mlg\",\"xho\",\"pnb\",\"grc\",\"hat\",\"lao\",\"npi\",\"cor\",\"nah\",\"avk\",\"mar\",\"guj\",\"pan\",\"kir\",\"myv\",\"prg\",\"sux\",\"crs\",\"ckt\",\"bak\",\"zlm\",\"hil\",\"cbk\",\"chr\",\"nav\",\"lkt\",\"enm\",\"arq\",\"lin\",\"abk\",\"pcd\",\"rom\",\"gsw\",\"tam\",\"zul\",\"awa\",\"wln\",\"amh\",\"bar\",\"hbo\",\"mhr\",\"bho\",\"mrj\",\"ckb\",\"osx\",\"pfl\",\"mgm\",\"sna\",\"mah\",\"hau\",\"kan\",\"nog\",\"sin\",\"glv\",\"dng\",\"kal\",\"liv\",\"vro\",\"apc\",\"jdt\",\"fur\",\"che\",\"haw\",\"yor\",\"crh\",\"pdc\",\"ppl\",\"kin\",\"shs\",\"mnw\",\"tet\",\"sah\",\"kum\",\"ngt\",\"nya\",\"pus\",\"hif\",\"mya\",\"moh\",\"wol\",\"tir\",\"ton\",\"lzz\",\"oar\",\"lug\",\"brx\",\"non\",\"mww\",\"hak\",\"nlv\",\"ngu\",\"bua\",\"aym\",\"vec\",\"ibo\",\"tkl\",\"bam\",\"kha\",\"ceb\",\"lou\",\"fuc\",\"smo\",\"gag\",\"lfn\",\"arg\",\"umb\",\"tyv\",\"kjh\",\"oji\",\"cyo\",\"urh\",\"kzj\",\"pam\",\"srd\",\"lmo\",\"swg\",\"mdf\",\"gil\",\"snd\",\"tso\",\"sot\",\"zza\",\"tsn\",\"pau\",\"som\",\"egl\",\"ady\",\"asm\",\"ori\",\"dtp\",\"cho\",\"max\",\"kam\",\"niu\",\"sag\",\"ilo\",\"kaa\",\"fuv\",\"nch\",\"hoc\",\"iba\",\"gbm\",\"sun\",\"war\",\"mvv\",\"pap\",\"ary\",\"kxi\",\"csb\",\"pag\",\"cos\",\"rif\",\"kek\",\"krc\",\"aii\",\"ban\",\"ssw\",\"tvl\",\"mfe\",\"tah\",\"bvy\",\"bcl\",\"hnj\",\"nau\",\"nst\",\"afb\",\"quc\",\"min\",\"tmw\",\"mad\",\"bjn\",\"mai\",\"cjy\",\"got\",\"hsn\",\"gan\",\"tzl\",\"dws\",\"ldn\",\"afh\",\"sgs\",\"krl\",\"vep\",\"rue\",\"tly\",\"mic\",\"ext\",\"izh\",\"sma\",\"jam\",\"cmo\",\"mwl\",\"kpv\",\"koi\",\"bis\",\"ike\",\"run\",\"evn\",\"ryu\",\"mnc\",\"aoz\",\"otk\",\"kas\",\"aln\",\"akl\",\"yua\",\"shy\",\"fkv\",\"gos\",\"fij\",\"thv\",\"zgh\",\"gcf\",\"cay\",\"xmf\",\"tig\",\"div\",\"lij\",\"rap\",\"hrx\",\"cpi\",\"tts\",\"gaa\",\"tmr\",\"iii\",\"ltg\",\"bzt\",\"syc\",\"emx\",\"gom\",\"chg\",\"osp\",\"stq\",\"frr\",\"fro\",\"nys\",\"toi\",\"new\",\"phn\",\"jpa\",\"rel\",\"drt\",\"chn\",\"pli\",\"laa\",\"bal\",\"hdn\",\"hax\",\"mik\",\"ajp\",\"xqa\",\"pal\",\"crk\",\"mni\",\"lut\",\"ayl\",\"ood\",\"sdh\",\"ofs\",\"nus\",\"kiu\",\"diq\",\"qxq\",\"alt\",\"bfz\",\"klj\",\"mus\",\"srn\",\"guc\",\"lim\",\"zea\",\"shi\",\"mnr\",\"bom\",\"sat\",\"szl\"]\r\nfeatures = Features({ 'label': ClassLabel(names=class_names), 'text': 
Value('string')})\r\nnum_labels = features['label'].num_classes\r\ndata_files = { \"train\": \"train.csv\", \"test\": \"test.csv\" }\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n)\r\n# You can make this part faster with num_proc=<some int>\r\nsentences = sentences.map(lambda ex: features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None, features=features)\r\n```\r\n\r\n@lhoestq IIRC, I suggested adding `cast_to_storage` to `ClassLabel` + `table_cast` to the packaged loaders if the `ClassLabel`/`Image`/`Audio` type is present in `features` to avoid this kind of error, but your concern was speed. IMO shouldn't be a problem if we do `table_cast` only when these features are present.", "I agree packaged loaders should support `ClassLabel` feature without throwing an error.", "@albertvillanova @mariosasko thank you, with that change now I get\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[<ipython-input-9-eeb68eeb9bec>](https://localhost:8080/#) in <module>()\r\n 11 )\r\n 12 # You can make this part faster with num_proc=<some int>\r\n---> 13 sentences = sentences.map(lambda ex: features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None, features=features)\r\n 14 sentences = sentences.shuffle()\r\n\r\n8 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in validate_function_output(processed_inputs, indices)\r\n 2193 if processed_inputs is not None and not isinstance(processed_inputs, (Mapping, pa.Table)):\r\n 2194 raise TypeError(\r\n-> 2195 f\"Provided `function` which is applied to all elements of table returns a variable of type {type(processed_inputs)}. Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.\"\r\n 2196 )\r\n 2197 elif isinstance(indices, list) and isinstance(processed_inputs, Mapping):\r\n\r\nTypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'int'>. Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.\r\n```\r\n\r\nthe error is raised by [this](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L2221)\r\n\r\n```\r\n[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in validate_function_output(processed_inputs, indices)\r\n```", "@mariosasko changed it like\r\n\r\n```python\r\nsentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n```\r\n\r\nto avoid the above errorr.", "Any update on this? Is this correct ?\r\n> @mariosasko changed it like\r\n> \r\n> ```python\r\n> sentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n> ```\r\n> \r\n> to avoid the above errorr.\r\n\r\n" ]
2022-04-25T07:28:42
2022-05-31T12:16:31
2022-05-31T12:16:31
NONE
null
null
null
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', 
column_names=['label', 'text'], features = features ``` ERROR: ``` ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None) Value(dtype='string', id=None) Using custom data configuration loretoparisi--tatoeba-sentences-7b2c5e991f398f39 Downloading and preparing dataset csv/loretoparisi--tatoeba-sentences to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-7b2c5e991f398f39/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... 
Downloading data files: 100% 2/2 [00:18<00:00, 8.06s/it] Downloading data: 100% 391M/391M [00:13<00:00, 35.3MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 36.5MB/s] Failed to read file '/root/.cache/huggingface/datasets/downloads/933132df9905194ea9faeb30cabca8c49318795612f6495fcb941a290191dd5d' with error <class 'ValueError'>: invalid literal for int() with base 10: 'cmn' --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) 15 frames /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() ValueError: invalid literal for int() with base 10: 'cmn' ``` while loading without `features` it loads without errors ``` sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'] ) ``` but the `label` col seems to be wrong (without the `ClassLabel` object): ``` sentences['train'].features {'label': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None)} ``` The dataset was https://huggingface.co/datasets/loretoparisi/tatoeba-sentences Dataset format is: ``` ces Nechci vědět, co je tam uvnitř. ces Kdo o tom chce slyšet? deu Tom sagte, er fühle sich nicht wohl. ber Mel-iyi-d anida-t tura ? hun Gondom lesz rá rögtön. ber Mel-iyi-d anida-tt tura ? deu Ich will dich nicht reden hören. ``` ### Expected behavior ```shell correctly load train and test files. ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4210/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4210/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
36 days, 4:47:49
https://api.github.com/repos/huggingface/datasets/issues/4199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4199/comments
https://api.github.com/repos/huggingface/datasets/issues/4199/events
https://github.com/huggingface/datasets/issues/4199
1,211,953,308
I_kwDODunzps5IPPCc
4,199
Cache miss during reload for datasets using image fetch utilities through map
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apsdehal", "id": 3616806, "login": "apsdehal", "node_id": "MDQ6VXNlcjM2MTY4MDY=", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "repos_url": "https://api.github.com/users/apsdehal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "type": "User", "url": "https://api.github.com/users/apsdehal", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache", "Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": get_datasets_user_agent()},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nwith \r\n```python\r\nUSER_AGENT = get_datasets_user_agent()\r\n\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": USER_AGENT},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nfixes the issue?", "Thanks @mariosasko. That does fix the issue. In general, I think these image downloading utilities since they are being used by a lot of image dataset should be provided as a part of `datasets` library right to keep the logic consistent and READMEs smaller? If they already exists, that is also great, please point me to those. I saw that `http_get` does exist.", "You can find my rationale (and a proposed solution) for why these utilities are not a part of `datasets` here: https://github.com/huggingface/datasets/pull/4100#issuecomment-1097994003.", "Makes sense. But, I think as the number of image datasets as grow, more people are copying pasting original code from docs to work as it is while we make fixes to them later. I think we do need a central place for these to avoid that confusion as well as more easier access to image datasets. Should we restart that discussion, possible on slack?" ]
2022-04-22T07:47:08
2022-04-26T17:00:32
2022-04-26T13:38:26
CONTRIBUTOR
null
null
null
null
## Describe the bug It looks like that result of `.map` operation dataset are missing the cache when you reload the script and always run from scratch. In same interpretor session, they are able to find the cache and reload it. But, when you exit the interpretor and reload it, the downloading starts from scratch. ## Steps to reproduce the bug Using the example provided in `red_caps` dataset. ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import urllib import PIL.Image import datasets from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": get_datasets_user_agent()}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"])) return batch def process_image_urls(batch): processed_batch_image_urls = [] for image_url in batch["image_url"]: processed_example_image_urls = [] image_url_splits = re.findall(r"http\S+", image_url) for image_url_split in image_url_splits: if "imgur" in image_url_split and "," in image_url_split: for image_url_part in image_url_split.split(","): if not image_url_part: continue image_url_part = image_url_part.strip() root, ext = os.path.splitext(image_url_part) if not root.startswith("http"): root = "http://i.imgur.com/" + root root = root.split("#")[0] if not ext: ext = ".jpg" ext = re.split(r"[?%]", ext)[0] image_url_part = root + ext processed_example_image_urls.append(image_url_part) else: processed_example_image_urls.append(image_url_split) processed_batch_image_urls.append(processed_example_image_urls) batch["image_url"] = processed_batch_image_urls return batch dset = load_dataset("red_caps", "jellyfish") dset = dset.map(process_image_urls, batched=True, num_proc=4) features = dset["train"].features.copy() features["image"] = datasets.Sequence(datasets.Image()) num_threads = 5 dset = dset.map(fetch_images, batched=True, batch_size=50, features=features, fn_kwargs={"num_threads": num_threads}) ``` Run this in an interpretor or as a script twice and see that the cache is missed the second time. ## Expected results At reload there should not be any cache miss ## Actual results Every time script is run, cache is missed and dataset is built from scratch. ## Environment info - `datasets` version: 2.1.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4199/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4199/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4 days, 5:51:18
https://api.github.com/repos/huggingface/datasets/issues/4198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4198/comments
https://api.github.com/repos/huggingface/datasets/issues/4198/events
https://github.com/huggingface/datasets/issues/4198
1,211,456,559
I_kwDODunzps5INVwv
4,198
There is no dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1625647?v=4", "events_url": "https://api.github.com/users/wilfoderek/events{/privacy}", "followers_url": "https://api.github.com/users/wilfoderek/followers", "following_url": "https://api.github.com/users/wilfoderek/following{/other_user}", "gists_url": "https://api.github.com/users/wilfoderek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wilfoderek", "id": 1625647, "login": "wilfoderek", "node_id": "MDQ6VXNlcjE2MjU2NDc=", "organizations_url": "https://api.github.com/users/wilfoderek/orgs", "received_events_url": "https://api.github.com/users/wilfoderek/received_events", "repos_url": "https://api.github.com/users/wilfoderek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wilfoderek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wilfoderek/subscriptions", "type": "User", "url": "https://api.github.com/users/wilfoderek", "user_view_type": "public" }
[]
closed
false
null
[]
[]
2022-04-21T19:19:26
2022-05-03T11:29:05
2022-04-22T06:12:25
NONE
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4198/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4198/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
10:52:59
https://api.github.com/repos/huggingface/datasets/issues/4196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4196/comments
https://api.github.com/repos/huggingface/datasets/issues/4196/events
https://github.com/huggingface/datasets/issues/4196
1,211,271,261
I_kwDODunzps5IMohd
4,196
Embed image and audio files in `save_to_disk`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
[]
2022-04-21T16:25:18
2022-12-14T18:22:59
2022-12-14T18:22:59
MEMBER
null
null
null
null
Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files. Adding `embed_external_files` and set it to True by default to save_to_disk would be kind of a breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice: - the resulting dataset is self contained, in case you want to delete your cache for example or share it with someone else - users also upload these Arrow files to cloud storage via the fs parameter, and in this case they would expect to upload a self-contained dataset - consistency with push_to_hub This can be implemented at the same time as sharding for `save_to_disk` for efficiency, and reuse the helpers from `push_to_hub` to embed the external files. cc @mariosasko
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/4196/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4196/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
237 days, 1:57:41
https://api.github.com/repos/huggingface/datasets/issues/4192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4192/comments
https://api.github.com/repos/huggingface/datasets/issues/4192/events
https://github.com/huggingface/datasets/issues/4192
1,210,692,554
I_kwDODunzps5IKbPK
4,192
load_dataset can't load local dataset,Unable to find ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/33253979?v=4", "events_url": "https://api.github.com/users/ahf876828330/events{/privacy}", "followers_url": "https://api.github.com/users/ahf876828330/followers", "following_url": "https://api.github.com/users/ahf876828330/following{/other_user}", "gists_url": "https://api.github.com/users/ahf876828330/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ahf876828330", "id": 33253979, "login": "ahf876828330", "node_id": "MDQ6VXNlcjMzMjUzOTc5", "organizations_url": "https://api.github.com/users/ahf876828330/orgs", "received_events_url": "https://api.github.com/users/ahf876828330/received_events", "repos_url": "https://api.github.com/users/ahf876828330/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ahf876828330/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahf876828330/subscriptions", "type": "User", "url": "https://api.github.com/users/ahf876828330", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?", "Hi @ahf876828330, \r\n\r\nAs @stevhliu pointed out, the proper way to load a dataset is not trying to load its metadata file.\r\n\r\nIn your case, as the dataset script is local, you should better point to your local loading script:\r\n```python\r\ndataset = load_dataset(\"dataset/opus_books.py\")\r\n```\r\n\r\nPlease, feel free to re-open this issue if the previous code snippet does not work for you.", "> Hi! :)\r\n> \r\n> I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?\r\n\r\nYes,you are right!So if I have a metadata dataset local,How can I turn it to a dataset that can be used by the load_dataset() function?Are there some examples?", "The metadata file isn't a dataset so you can't turn it into one. You should try @albertvillanova's code snippet above (now merged in the docs [here](https://huggingface.co/docs/datasets/master/en/loading#local-loading-script)), which uses your local loading script `opus_books.py` to:\r\n\r\n1. Download the actual dataset. \r\n2. Once the dataset is downloaded, `load_dataset` will load it for you." ]
2022-04-21T08:28:58
2022-04-25T16:51:57
2022-04-22T07:39:53
NONE
null
null
null
null
Traceback (most recent call last): File "/home/gs603/ahf/pretrained/model.py", line 48, in <module> dataset = load_dataset("json",data_files="dataset/dataset_infos.json") File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset **config_kwargs, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1496, in load_dataset_builder data_files=data_files, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1155, in dataset_module_factory download_mode=download_mode, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 800, in get_module data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 582, in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 544, in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 194, in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 144, in _resolve_single_pattern_locally raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find '/home/gs603/ahf/pretrained/dataset/dataset_infos.json' at /home/gs603/ahf/pretrained ![image](https://user-images.githubusercontent.com/33253979/164413285-84ea65ac-9126-408f-9cd2-ce4751a5dd73.png) ![image](https://user-images.githubusercontent.com/33253979/164413338-4735142f-408b-41d9-ab87-8484de2be54f.png) the code is in the model.py,why I can't use the load_dataset function to load my local dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4192/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4192/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
23:10:55
https://api.github.com/repos/huggingface/datasets/issues/4191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4191/comments
https://api.github.com/repos/huggingface/datasets/issues/4191/events
https://github.com/huggingface/datasets/issues/4191
1,210,028,090
I_kwDODunzps5IH5A6
4,191
feat: create an `Array3D` column from a list of arrays of dimension 2
{ "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SaulLu", "id": 55560583, "login": "SaulLu", "node_id": "MDQ6VXNlcjU1NTYwNTgz", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "repos_url": "https://api.github.com/users/SaulLu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "type": "User", "url": "https://api.github.com/users/SaulLu", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `data_map` are arrays of dimension 2\r\n - the outer list in `prepare_dataset_2D` should not be taken into account in the dimension counting, as it is used because in `map` you pass `batched=True`\r\n\r\nNote that for the 3D alternatives you mention:\r\n- In `prepare_dataset_3D_ter`, you create an `Array3D` from arrays of dimension 3:\r\n - the array `data_map[index][np.newaxis, :, :]` has dimension 3\r\n - the outer list in `prepare_dataset_3D_ter` is the one used by `batched=True`\r\n- In `prepare_dataset_3D_bis`, you create an `Array3D` from a list of list of lists:\r\n - the value of `data_map[index].tolist()` is a list of lists\r\n - it is enclosed by another list `[data_map[index].tolist()]`, thus giving a list of list of lists\r\n - the outer list is the one used by `batched=True`\r\n\r\nTherefore, if I understand correctly, your request would be to be able to create an `Array3D` from a list of an array of dimension 2:\r\n- In `prepare_dataset_3D`, `data_map[index]` is an array of dimension 2\r\n- it is enclosed by a list `[data_map[index]]`, thus giving a list of an array of dimension 2\r\n- the outer list is the one used by `batched=True`\r\n\r\nPlease, feel free to tell me if I did not understand you correctly.", "Hi @albertvillanova ,\r\n\r\nIndeed my message was confusing and you guessed right :smile: : I think would be interesting to be able to create an Array3D from a list of an array of dimension 2. \r\n\r\nFor the 2D case I should have given as a \"similar\" example:\r\n```python\r\n\r\ndata_map_1D = {\r\n 1: np.array([0.2, 0.4]),\r\n 2: np.array([0.1, 0.4]),\r\n}\r\n\r\ndef prepare_dataset_2D(batch):\r\n batch[\"pixel_values\"] = [[data_map_1D[index]] for index in batch[\"id\"]]\r\n return batch\r\n \r\nds_2D = ds.map(\r\n prepare_dataset_2D, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array2D(shape=(1, 2), dtype=\"float32\")})\r\n)\r\n```" ]
2022-04-20T18:04:32
2022-05-12T15:08:40
2022-05-12T15:08:40
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create a `Array3D` column from a list of lists of arrays of dimension 1. To illustrate my proposal, let's take the following toy dataset t: ```python import numpy as np from datasets import Dataset, features data_map = { 1: np.array([[0.2, 0,4],[0.19, 0,3]]), 2: np.array([[0.1, 0,4],[0.19, 0,3]]), } def create_toy_ds(): my_dict = {"id":[1, 2]} return Dataset.from_dict(my_dict) ds = create_toy_ds() ``` The following 2D processing works without any errors raised: ```python def prepare_dataset_2D(batch): batch["pixel_values"] = [data_map[index] for index in batch["id"]] return batch ds_2D = ds.map( prepare_dataset_2D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array2D(shape=(2, 3), dtype="float32")}) ) ``` The following 3D processing doesn't work: ```python def prepare_dataset_3D(batch): batch["pixel_values"] = [[data_map[index]] for index in batch["id"]] return batch ds_3D = ds.map( prepare_dataset_3D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3, dtype="float32")}) ) ``` The error raised is: ``` --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) [<ipython-input-6-676547e4cd41>](https://localhost:8080/#) in <module>() 3 batched=True, 4 remove_columns=ds.column_names, ----> 5 features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) 6 ) 12 frames [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1971 new_fingerprint=new_fingerprint, 1972 disable_tqdm=disable_tqdm, -> 1973 desc=desc, 1974 ) 1975 else: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 518 self: "Dataset" = kwargs.pop("self") 519 # apply actual function --> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 522 for dataset in datasets: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 485 } 486 # apply actual function --> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 489 # re-apply format to the output [/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 456 # Call actual function 457 --> 458 out = func(self, *args, **kwargs) 459 460 # Update fingerprint of in-place transforms + update in-place history of transforms [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, 
rank, offset, disable_tqdm, desc, cache_only) 2354 writer.write_table(batch) 2355 else: -> 2356 writer.write_batch(batch) 2357 if update_data and writer is not None: 2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_batch(self, batch_examples, writer_batch_size) 505 col_try_type = try_features[col] if try_features is not None and col in try_features else None 506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 507 arrays.append(pa.array(typed_sequence)) 508 inferred_features[col] = typed_sequence.get_inferred_type() 509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in __arrow_array__(self, type) 175 storage = list_of_np_array_to_pyarrow_listarray(data, type=pa_type.value_type) 176 else: --> 177 storage = pa.array(data, pa_type.storage_dtype) 178 return pa.ExtensionArray.from_storage(pa_type, storage) 179 /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Can only convert 1-dimensional array values ``` **Describe the solution you'd like** No error in the second scenario and an identical result to the following snippets. **Describe alternatives you've considered** There are other alternatives that work such as: ```python def prepare_dataset_3D_bis(batch): batch["pixel_values"] = [[data_map[index].tolist()] for index in batch["id"]] return batch ds_3D_bis = ds.map( prepare_dataset_3D_bis, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` or ```python def prepare_dataset_3D_ter(batch): batch["pixel_values"] = [data_map[index][np.newaxis, :, :] for index in batch["id"]] return batch ds_3D_ter = ds.map( prepare_dataset_3D_ter, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` But both solutions require the user to be aware that `data_map[index]` is an `np.array` type. cc @lhoestq as we discuss this offline :smile:
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4191/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4191/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
21 days, 21:04:08
https://api.github.com/repos/huggingface/datasets/issues/4185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4185/comments
https://api.github.com/repos/huggingface/datasets/issues/4185/events
https://github.com/huggingface/datasets/issues/4185
1,209,429,743
I_kwDODunzps5IFm7v
4,185
Librispeech documentation, clarification on format
{ "avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4", "events_url": "https://api.github.com/users/albertz/events{/privacy}", "followers_url": "https://api.github.com/users/albertz/followers", "following_url": "https://api.github.com/users/albertz/following{/other_user}", "gists_url": "https://api.github.com/users/albertz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertz", "id": 59132, "login": "albertz", "node_id": "MDQ6VXNlcjU5MTMy", "organizations_url": "https://api.github.com/users/albertz/orgs", "received_events_url": "https://api.github.com/users/albertz/received_events", "repos_url": "https://api.github.com/users/albertz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertz/subscriptions", "type": "User", "url": "https://api.github.com/users/albertz", "user_view_type": "public" }
[]
open
false
null
[]
[ "(@patrickvonplaten )", "Also cc @lhoestq here", "The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is done on the fly, which is also why one should **not** do `ds[\"audio\"][\"array\"][0]` as this will decode all dataset samples, but instead `ds[0][\"audio\"][\"array\"]` see: https://huggingface.co/docs/datasets/audio_process#audio-datasets\r\n\r\n", "So, again to clarify: On disk, only the raw flac file content is stored? Is this also the case after `save_to_disk`?\r\n\r\nAnd is it simple to also store it re-encoded as ogg or mp3 instead?\r\n", "Hey, \r\n\r\nSorry yeah I was just about to look into this! We actually had an outdated version of Librispeech ASR that didn't save any files, but instead converted the audio files to a byte string, then was then decoded on-the-fly. This however is not very user-friendly so we recently decided to instead show the full path of the audio files with the `path` parameter.\r\n\r\nI'm currently changing this for Librispeech here: https://github.com/huggingface/datasets/pull/4184 .\r\nYou should be able to see the audio file in the original `flac` format under `path` then. I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ? ", "> I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ?\r\n\r\nSure, I would expect that `load_dataset(\"librispeech_asr\")` would give you the original (not re-encoded) data (flac or already decoded). So such re-encoding logic would be some separate generic function. So I could do sth like `dataset.reencode_as_ogg(**ogg_encode_opts).save_to_disk(...)` or so.\r\n", "A follow-up question: I wonder whether a Parquet dataset is maybe more what we actually want to have? (Following also my comment here: https://github.com/huggingface/datasets/pull/4184#issuecomment-1105045491.) Because I think we actually would prefer to embed the data content in the dataset.\r\n\r\nSo, instead of `save_to_disk`/`load_from_disk`, we would use `to_parquet`,`from_parquet`? Is there any downside? Are arrow files more efficient?\r\n\r\nRelated is also the doc update in #4193.\r\n", "`save_to_disk` saves the dataset as an Arrow file, which is the format we use to load a dataset using memory mapping. This way the dataset does not fill your RAM, but is read from your disk instead.\r\n\r\nTherefore you can directly reload a dataset saved with `save_to_disk` using `load_from_disk`.\r\n\r\nParquet files are used for cold storage: to use memory mapping on a Parquet dataset, you first have to convert it to Arrow. We use Parquet to reduce the I/O when pushing/downloading data from the Hugging face Hub. When you load a Parquet file from the Hub, it is converted to Arrow on the fly during the download." ]
2022-04-20T09:35:55
2022-04-21T11:00:53
null
NONE
null
null
null
null
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53 > Note that in order to limit the required storage for preparing this dataset, the audio > is stored in the .flac format and is not converted to a float32 array. To convert, the audio > file to a float32 array, please make use of the `.map()` function as follows: > > ```python > import soundfile as sf > def map_to_array(batch): > speech_array, _ = sf.read(batch["file"]) > batch["speech"] = speech_array > return batch > dataset = dataset.map(map_to_array, remove_columns=["file"]) > ``` Is this still true? In my case, `ds["train.100"]` returns: ``` Dataset({ features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'], num_rows: 28539 }) ``` and taking the first instance yields: ``` {'file': '374-180298-0000.flac', 'audio': {'path': '374-180298-0000.flac', 'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ..., -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]), 'sampling_rate': 16000}, 'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED', 'speaker_id': 374, 'chapter_id': 180298, 'id': '374-180298-0000'} ``` The `audio` `array` seems to be already decoded. So such convert/decode code as mentioned in the doc is wrong? But I wonder, is it actually stored as flac on disk, and the decoding is done on-the-fly? Or was it decoded already during the preparation and is stored as raw samples on disk? Note that I also used `datasets.load_dataset("librispeech_asr", "clean").save_to_disk(...)` and then `datasets.load_from_disk(...)` in this example. Does this change anything on how it is stored on disk? A small related question: Actually I would prefer to even store it as mp3 or ogg on disk. Is this easy to convert?
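A minimal sketch of the lazy-decoding behaviour discussed in the replies above, assuming the `clean` config and the `validation` split (names taken from the thread, not from running the code here):

```python
from datasets import load_dataset

# Audio files stay as flac on disk; decoding happens lazily, per accessed example.
ds = load_dataset("librispeech_asr", "clean", split="validation")

# Good: decodes exactly one file into a float32 waveform.
sample = ds[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)

# Avoid ds["audio"]: that expression decodes every file in the split into memory.
```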
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4185/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4185/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/4182
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4182/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4182/comments
https://api.github.com/repos/huggingface/datasets/issues/4182/events
https://github.com/huggingface/datasets/issues/4182
1,208,285,235
I_kwDODunzps5IBPgz
4,182
Zenodo.org download is not responding
{ "avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4", "events_url": "https://api.github.com/users/dkajtoch/events{/privacy}", "followers_url": "https://api.github.com/users/dkajtoch/followers", "following_url": "https://api.github.com/users/dkajtoch/following{/other_user}", "gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dkajtoch", "id": 32985207, "login": "dkajtoch", "node_id": "MDQ6VXNlcjMyOTg1MjA3", "organizations_url": "https://api.github.com/users/dkajtoch/orgs", "received_events_url": "https://api.github.com/users/dkajtoch/received_events", "repos_url": "https://api.github.com/users/dkajtoch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions", "type": "User", "url": "https://api.github.com/users/dkajtoch", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "[Off topic but related: Is the uptime of S3 provably better than Zenodo's?]", "Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.\r\n\r\nIt was the dataset owners decision to host their data at Zenodo. You can see this on their website: https://marcobaroni.org/composes/sick.html\r\n\r\nAnd yes, you are right: Zenodo is currently having some incidents and people are reporting problems from it.\r\n\r\nOn the other hand, we could contact the data owners and propose them to host their data at our Hugging Face Hub.\r\n\r\n@julien-c I guess so.\r\n", "Thanks @albertvillanova. I know that the problem lies in the source data. I just wanted to point out that these kind of problems are unavoidable without having one place where data sources are cached. Websites may go down or data sources may move. Having a copy in Hugging Face Hub would be a great solution. ", "Definitely, @dkajtoch! But we have to ask permission to the data owners. And many dataset licenses directly forbid data redistribution: in those cases we are not allowed to host their data on our Hub.", "Ahhh good point! License is the problem :(" ]
2022-04-19T12:26:57
2022-04-20T07:11:05
2022-04-20T07:11:05
CONTRIBUTOR
null
null
null
null
## Describe the bug The source download_url at zenodo.org does not respond. `_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"` Other datasets also use zenodo.org to store data and they cannot be downloaded either. It would be better to use a more reliable way to store the original data, such as an S3 bucket. ## Steps to reproduce the bug ```python load_dataset("sick") ``` ## Expected results The dataset should be downloaded. ## Actual results ConnectionError: Couldn't reach https://zenodo.org/record/2787612/files/SICK.zip?download=1 (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out. (read timeout=100)"))) ## Environment info - `datasets` version: 2.1.0 - Platform: Darwin-21.4.0-x86_64-i386-64bit - Python version: 3.7.11 - PyArrow version: 7.0.0 - Pandas version: 1.3.5
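A possible stopgap while the upstream host is timing out, sketched below; it only helps with transient failures, since raising `max_retries` just retries the same HTTP request rather than switching hosts:

```python
from datasets import DownloadConfig, load_dataset

# Retry the flaky Zenodo download a few times before giving up.
download_config = DownloadConfig(max_retries=5)
dataset = load_dataset("sick", download_config=download_config)
```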
{ "avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4", "events_url": "https://api.github.com/users/dkajtoch/events{/privacy}", "followers_url": "https://api.github.com/users/dkajtoch/followers", "following_url": "https://api.github.com/users/dkajtoch/following{/other_user}", "gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dkajtoch", "id": 32985207, "login": "dkajtoch", "node_id": "MDQ6VXNlcjMyOTg1MjA3", "organizations_url": "https://api.github.com/users/dkajtoch/orgs", "received_events_url": "https://api.github.com/users/dkajtoch/received_events", "repos_url": "https://api.github.com/users/dkajtoch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions", "type": "User", "url": "https://api.github.com/users/dkajtoch", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4182/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4182/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
18:44:08
https://api.github.com/repos/huggingface/datasets/issues/4181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4181/comments
https://api.github.com/repos/huggingface/datasets/issues/4181/events
https://github.com/huggingface/datasets/issues/4181
1,208,194,805
I_kwDODunzps5IA5b1
4,181
Support streaming FLEURS dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
[ "Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.\r\n\r\nThat's because `download_and_extract` doesn't support TAR archives in streaming mode.", "Tried to make it streamable, but I don't think it's really possible. @lhoestq @polinaeterna maybe you guys can check: \r\nhttps://huggingface.co/datasets/google/fleurs/commit/dcf80160cd77977490a8d32b370c027107f2407b \r\n\r\nreal quick. \r\n\r\nI think the problem is that we cannot ensure that the metadata file is found before the audio. Or is this possible somehow @lhoestq ? ", "@patrickvonplaten I think the metadata file should be found first because the audio files are contained in a folder next to the metadata files (just as in common voice), so the metadata files should be \"on top of the list\" as they are closer to the root in the directories hierarchy ", "@patrickvonplaten but apparently it doesn't... I don't really know why.", "Yeah! Any ideas what could be the reason here? cc @lhoestq ?", "The order of the files is determined when the TAR archive is created, depending on the commands the creator ran.\r\nIf the metadata file is not at the beginning of the file, that makes streaming completely inefficient. In this case the TAR archive needs to be recreated in an appropriate order.", "Actually we could maybe just host the metadata file ourselves and then stream the audio data only. Don't think that this would be a problem for the FLEURS authors (I can ask them :-)) ", "I made a PR to their repo to support streaming (by uploading the metadata file to the Hub). See:\r\n- https://huggingface.co/datasets/google/fleurs/discussions/4", "I'm closing this issue as the PR above has been merged." ]
2022-04-19T11:09:56
2022-07-25T11:44:02
2022-07-25T11:44:02
CONTRIBUTOR
null
null
null
null
## Dataset viewer issue for 'google/fleurs' https://huggingface.co/datasets/google/fleurs ``` Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. ``` Am I the one who added this dataset ? Yes Can I fix this somehow in the script? @lhoestq @severo
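For context, a rough sketch of the `dl_manager.iter_archive` pattern the error message points to; `_AUDIO_URL`, the class name, and the feature layout are placeholders, and this is not the fix that was eventually applied to FLEURS (which uploaded the metadata file to the Hub instead):

```python
import datasets

_AUDIO_URL = "https://example.com/af_za.tar.gz"  # placeholder, not the real FLEURS URL

class SketchBuilder(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"path": datasets.Value("string"), "audio": datasets.Audio(sampling_rate=16_000)}
            )
        )

    def _split_generators(self, dl_manager):
        # download() also works in streaming mode; iter_archive() then walks the
        # TAR member by member without extracting it to disk.
        archive = dl_manager.download(_AUDIO_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (member_path, file_object) pairs in archive order.
        for key, (path, f) in enumerate(files):
            yield key, {"path": path, "audio": {"path": path, "bytes": f.read()}}
```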
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4181/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4181/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
97 days, 0:34:06
https://api.github.com/repos/huggingface/datasets/issues/4180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4180/comments
https://api.github.com/repos/huggingface/datasets/issues/4180/events
https://github.com/huggingface/datasets/issues/4180
1,208,042,320
I_kwDODunzps5IAUNQ
4,180
Add some iteration method on a dataset column (specific for inference)
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Thanks for the suggestion ! I agree it would be nice to have something directly in `datasets` to do something as simple as that\r\n\r\ncc @albertvillanova @mariosasko @polinaeterna What do you think if we have something similar to pandas `Series` that wouldn't bring everything in memory when doing `dataset[\"audio\"]` ? Currently it returns a list with all the decoded audio data in memory.\r\n\r\nIt would be a breaking change though, since `isinstance(dataset[\"audio\"], list)` wouldn't work anymore, but we could implement a `Sequence` so that `dataset[\"audio\"][0]` still works and only loads one item in memory.\r\n\r\nYour alternative suggestion with `iterate` is also sensible, though maybe less satisfactory in terms of experience IMO", "I agree that current behavior (decoding all audio file sin the dataset when accessing `dataset[\"audio\"]`) is not useful, IMHO. Indeed in our docs, we are constantly warning our collaborators not to do that.\r\n\r\nTherefore I upvote for a \"useful\" behavior of `dataset[\"audio\"]`. I don't think the breaking change is important in this case, as I guess no many people use it with its current behavior. Therefore, for me it seems reasonable to return a generator (instead of an in-memeory list) for \"special\" features, like Audio/Image.\r\n\r\n@lhoestq on the other hand I don't understand your proposal about Pandas-like... ", "I recall I had the same idea while working on the `Image` feature, so I agree implementing something similar to `pd.Series` that lazily brings elements in memory would be beneficial.", "@lhoestq @mariosasko Could you please give a link to that new feature of `pandas.Series`? As far as I remember since I worked with pandas for more than 6 years, there was no lazy in-memory feature; it was everything in-memory; that was the reason why other frameworks were created, like Vaex or Dask, e.g. ", "Yea pandas doesn't do lazy loading. I was referring to pandas.Series to say that they have a dedicated class to represent a column ;)" ]
2022-04-19T09:15:45
2025-06-17T13:08:50
2025-06-17T13:08:50
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset. Having an iterator (or sequence) type of object would make inference with `transformers` 's `pipeline` easier to use and not so memory hungry. **Describe the solution you'd like** A clear and concise description of what you want to happen. For a non-breaking change: ```python for audio in dataset.iterate("audio"): # {"array": np.array(...), "sampling_rate":...} ``` For a breaking change solution (not necessary), changing the type of `dataset["audio"]` to a sequence type so that ```python pipe = pipeline(model="...") for out in pipe(dataset["audio"]): # {"text":....} ``` could work **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. ```python def iterate(dataset, key): for item in dataset: yield item[key] for out in pipeline(iterate(dataset, "audio")): # {"array": ...} ``` This works but requires the helper function which feels slightly clunky. **Additional context** Add any other context about the feature request here. The context is actually to showcase better integration between `pipeline` and `datasets` in the Quicktour demo: https://github.com/huggingface/transformers/pull/16723/files @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4180/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4180/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1155 days, 3:53:05
https://api.github.com/repos/huggingface/datasets/issues/4179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4179/comments
https://api.github.com/repos/huggingface/datasets/issues/4179/events
https://github.com/huggingface/datasets/issues/4179
1,208,001,118
I_kwDODunzps5IAKJe
4,179
Dataset librispeech_asr fails to load
{ "avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4", "events_url": "https://api.github.com/users/albertz/events{/privacy}", "followers_url": "https://api.github.com/users/albertz/followers", "following_url": "https://api.github.com/users/albertz/following{/other_user}", "gists_url": "https://api.github.com/users/albertz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertz", "id": 59132, "login": "albertz", "node_id": "MDQ6VXNlcjU5MTMy", "organizations_url": "https://api.github.com/users/albertz/orgs", "received_events_url": "https://api.github.com/users/albertz/received_events", "repos_url": "https://api.github.com/users/albertz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertz/subscriptions", "type": "User", "url": "https://api.github.com/users/albertz", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "@patrickvonplaten Hi! I saw that you prepared this? :)", "Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:\r\n\r\n> If you want to download things from this site, please download them one at a time, and please don't use any fancy software-- just download things from your browser or use 'wget'. We have a firewall rule to drop connections from hosts with more than 5 simultaneous connections, and certain types of download software may activate this rule.\r\n\r\nRelated: https://github.com/tensorflow/datasets/issues/3885", "Hey @albertz,\r\n\r\nNice to see you here! It's been a while ;-) ", "Sorry maybe the docs haven't been super clear here. By `split` we mean one of `train.500`, `train.360`, `train.100`, `validation`, `test`. For Librispeech, you'll have to specific a config (either `other` or `clean`) though:\r\n\r\n```py\r\ndatasets.load_dataset(\"librispeech_asr\", \"clean\")\r\n```\r\n\r\nshould work and give you all splits (being \"train\", \"test\", ...) for the clean config of the dataset.\r\n", "If you need both `\"clean\"` and `\"other\"` I think you'll have to do concatenate them as follows: \r\n\r\n```py\r\nfrom datasets import concatenate_datasets, load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\")\r\nclean = load_dataset(\"librispeech_asr\", \"clean\")\r\n\r\nlibrispeech = concatenate_datasets([other, clean])\r\n```\r\n\r\nSee https://huggingface.co/docs/datasets/v2.1.0/en/process#concatenate", "Downloading one split would be:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\", split=\"train.500\")\r\n```\r\n\r\n\r\n", "cc @lhoestq FYI maybe the docs can be improved here", "Ah thanks. But wouldn't it be easier/nicer (and more canonical) to just make it in a way that simply `load_dataset(\"librispeech_asr\")` works?", "Pinging @lhoestq here, think this could make sense! Not sure however how the dictionary would then look like", "Would it make sense to have `clean` as the default config ?\r\n\r\nAlso I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nI also opened a PR to improve the doc: https://github.com/huggingface/datasets/pull/4183", "> Would it make sense to have `clean` as the default config ?\r\n\r\nI think a user would expect that the default would give you the full dataset.\r\n\r\n> Also I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nIt does raise an error, but this error confused me because I did not understand why I needed a config, or why I could not simply download the whole dataset, which is what people usually do with Librispeech.\r\n", "+1 for @albertz. Also think lots of people download the whole dataset (`\"clean\"` + `\"other\"`) for Librispeech.\r\n\r\nThink there are also some people though who:\r\n- a) Don't have the memory to store the whole dataset\r\n- b) Just want to evaluate on one of the two configs", "Ok ! Adding the \"all\" configuration would do the job then, thanks ! 
In the \"all\" configuration we can merge all the train.xxx splits into one \"train\" split, or keep them separate depending on what's the most practical to use (probably put everything in \"train\" no ?)", "I'm not too familiar with how to work with HuggingFace datasets, but people often do some curriculum learning scheme, where they start with train.100, later go over to train.100 + train.360, and then later use the whole train (960h). It would be good if this is easily possible.\r\n", "Hey @albertz, \r\n\r\nopened a PR here. Think by adding the \"subdataset\" class to each split \"train\", \"dev\", \"other\" as shown here: https://github.com/huggingface/datasets/pull/4184/files#r853272727 it should be easily possible (e.g. with the filter function https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/main_classes#datasets.Dataset.filter )", "But also since everything is cached one could also just do:\r\n\r\n```python\r\nload_dataset(\"librispeech\", \"clean\", \"train.100\")\r\nload_dataset(\"librispeech\", \"clean\", \"train.100+train.360\")\r\nload_dataset(\"librispeech\" \"all\", \"train\") \r\n```", "Hi @patrickvonplaten ,\r\n\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?", "Hmm, I don't really see how that's possible: https://github.com/huggingface/datasets/blob/d22e39a0693d4be7410cf9a5d41fd5aac22be3cc/datasets/librispeech_asr/librispeech_asr.py#L51\r\n\r\nNote that all datasets related to `\"clean\"` are downloaded, but only `\"train.100\"` should be used. \r\n\r\ncc @lhoestq @albertvillanova @mariosasko can we do anything against download dataset links that are not related to the \"split\" that one actually needs. E.g. why should the split `\"train.360\"` be downloaded if for the user executes the above command:\r\n\r\n```py\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\")\r\n```", "@patrickvonplaten This problem is a bit harder than it may seem, and it has to do with how our scripts are structured - `_split_generators` downloads data for a split before its definition. There was an attempt to fix this in https://github.com/huggingface/datasets/pull/2249, but it wasn't flexible enough. 
Luckily, I have a plan of attack, and this issue is on our short-term roadmap, so I'll work on it soon.\r\n\r\nIn the meantime, one can use streaming or manually download a dataset script, remove unwanted splits and load a dataset via `load_dataset`.", "> load_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?\r\n\r\nSince this bug is still there and google led me here when I was searching for a solution, I am writing down how to quickly fix it (as suggested by @mariosasko) for whoever else is not familiar with how the HF Hub works.\r\n\r\nDownload the [librispeech_asr.py](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py) script and remove the unwanted splits both from the [`_DL_URLS` dictionary](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L47-L68) and from the [`_split_generators` function](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L121-L241).\r\n[Here ](https://huggingface.co/datasets/andreagasparini/librispeech_test_only) I made an example with only the test sets.\r\n\r\nThen either save the script locally and load the dataset via \r\n```python\r\nload_dataset(\"${local_path}/librispeech_asr.py\")\r\n```\r\n\r\nor [create a new dataset repo on the hub](https://huggingface.co/new-dataset) named \"librispeech_asr\" and upload the script there, then you can just run\r\n```python\r\nload_dataset(\"${hugging_face_username}/librispeech_asr\")\r\n```", "Fixed by https://github.com/huggingface/datasets/pull/4184" ]
2022-04-19T08:45:48
2022-07-27T16:10:00
2022-07-27T16:10:00
NONE
null
null
null
null
## Describe the bug The dataset librispeech_asr (standard Librispeech) fails to load. ## Steps to reproduce the bug ```python datasets.load_dataset("librispeech_asr") ``` ## Expected results It should download and prepare the whole dataset (all subsets). In [the doc](https://huggingface.co/datasets/librispeech_asr), it says it has two configurations (clean and other). However, the dataset doc says that not specifying `split` should just load the whole dataset, which is what I want. Also, in case of this specific dataset, this is also the standard what the community uses. When you look at any publications with results on Librispeech, they always use the whole train dataset for training. ## Actual results ``` ... File "/home/az/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c/librispeech_asr.py", line 119, in LibrispeechASR._split_generators line: archive_path = dl_manager.download(_DL_URLS[self.config.name]) locals: archive_path = <not found> dl_manager = <local> <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160> dl_manager.download = <local> <bound method DownloadManager.download of <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>> _DL_URLS = <global> {'clean': {'dev': 'http://www.openslr.org/resources/12/dev-clean.tar.gz', 'test': 'http://www.openslr.org/resources/12/test-clean.tar.gz', 'train.100': 'http://www.openslr.org/resources/12/train-clean-100.tar.gz', 'train.360': 'http://www.openslr.org/resources/12/train-clean-360.tar.gz'}, 'other'... self = <local> <datasets_modules.datasets.librispeech_asr.1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c.librispeech_asr.LibrispeechASR object at 0x7fc12a633310> self.config = <local> BuilderConfig(name='default', version=0.0.0, data_dir='/home/az/i6/setups/2022-03-20--sis/work/i6_core/datasets/huggingface/DownloadAndPrepareHuggingFaceDatasetJob.TV6Nwm6dFReF/output/data_dir', data_files=None, description=None) self.config.name = <local> 'default', len = 7 KeyError: 'default' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31 - Python version: 3.9.9 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
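Since `concatenate_datasets` operates on `Dataset` objects rather than `DatasetDict`s, merging the two configs into a single 960h training set can be sketched as below (split names taken from the script quoted in the traceback; this predates the "all" config discussed in the comments):

```python
from datasets import concatenate_datasets, load_dataset

clean_100 = load_dataset("librispeech_asr", "clean", split="train.100")
clean_360 = load_dataset("librispeech_asr", "clean", split="train.360")
other_500 = load_dataset("librispeech_asr", "other", split="train.500")

# Full 960h training set from both configs.
train_960 = concatenate_datasets([clean_100, clean_360, other_500])
```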
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4179/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4179/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
99 days, 7:24:12
https://api.github.com/repos/huggingface/datasets/issues/4176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4176/comments
https://api.github.com/repos/huggingface/datasets/issues/4176/events
https://github.com/huggingface/datasets/issues/4176
1,206,515,563
I_kwDODunzps5H6fdr
4,176
Very slow between two operations
{ "avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4", "events_url": "https://api.github.com/users/yanan1116/events{/privacy}", "followers_url": "https://api.github.com/users/yanan1116/followers", "following_url": "https://api.github.com/users/yanan1116/following{/other_user}", "gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yanan1116", "id": 26405281, "login": "yanan1116", "node_id": "MDQ6VXNlcjI2NDA1Mjgx", "organizations_url": "https://api.github.com/users/yanan1116/orgs", "received_events_url": "https://api.github.com/users/yanan1116/received_events", "repos_url": "https://api.github.com/users/yanan1116/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions", "type": "User", "url": "https://api.github.com/users/yanan1116", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[]
2022-04-17T23:52:29
2022-04-18T00:03:00
2022-04-18T00:03:00
NONE
null
null
null
null
Hello, in the processing stage, I use two operations. The first one, map + filter, is very fast and uses all the cores, while the second step is very slow and does not use all the cores. Also, there is a significant lag between them. Am I missing something? ``` raw_datasets = raw_datasets.map(split_func, batched=False, num_proc=args.preprocessing_num_workers, load_from_cache_file=not args.overwrite_cache, desc = "running split para ==>")\ .filter(lambda example: example['text1']!='' and example['text2']!='', num_proc=args.preprocessing_num_workers, desc="filtering ==>") processed_datasets = raw_datasets.map( preprocess_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache, desc="Running tokenizer on dataset===>", ) ```
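If the lag comes from the per-example Python overhead of the non-batched filter (an assumption, not something the snippet proves), running the filter in batched mode is one thing to try; the variable names below reuse those from the snippet above:

```python
raw_datasets = raw_datasets.filter(
    # Batched mode: each call receives a dict of lists and returns a list of booleans.
    lambda batch: [t1 != "" and t2 != "" for t1, t2 in zip(batch["text1"], batch["text2"])],
    batched=True,
    num_proc=args.preprocessing_num_workers,
    desc="filtering ==>",
)
```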
{ "avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4", "events_url": "https://api.github.com/users/yanan1116/events{/privacy}", "followers_url": "https://api.github.com/users/yanan1116/followers", "following_url": "https://api.github.com/users/yanan1116/following{/other_user}", "gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yanan1116", "id": 26405281, "login": "yanan1116", "node_id": "MDQ6VXNlcjI2NDA1Mjgx", "organizations_url": "https://api.github.com/users/yanan1116/orgs", "received_events_url": "https://api.github.com/users/yanan1116/received_events", "repos_url": "https://api.github.com/users/yanan1116/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions", "type": "User", "url": "https://api.github.com/users/yanan1116", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4176/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4176/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:10:31
https://api.github.com/repos/huggingface/datasets/issues/4169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4169/comments
https://api.github.com/repos/huggingface/datasets/issues/4169/events
https://github.com/huggingface/datasets/issues/4169
1,203,995,869
I_kwDODunzps5Hw4Td
4,169
Timit_asr dataset cannot be previewed recently
{ "avatar_url": "https://avatars.githubusercontent.com/u/75192317?v=4", "events_url": "https://api.github.com/users/YingLi001/events{/privacy}", "followers_url": "https://api.github.com/users/YingLi001/followers", "following_url": "https://api.github.com/users/YingLi001/following{/other_user}", "gists_url": "https://api.github.com/users/YingLi001/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YingLi001", "id": 75192317, "login": "YingLi001", "node_id": "MDQ6VXNlcjc1MTkyMzE3", "organizations_url": "https://api.github.com/users/YingLi001/orgs", "received_events_url": "https://api.github.com/users/YingLi001/received_events", "repos_url": "https://api.github.com/users/YingLi001/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YingLi001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YingLi001/subscriptions", "type": "User", "url": "https://api.github.com/users/YingLi001", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Thanks for reporting. The bug has already been detected, and we hope to fix it soon.", "TIMIT is now a dataset that requires manual download, see #4145 \r\n\r\nTherefore it might take a bit more time to fix it", "> TIMIT is now a dataset that requires manual download, see #4145\r\n> \r\n> Therefore it might take a bit more time to fix it\r\n\r\nThank you for your quickly response. Exactly, I also found the manual download issue in the morning. But when I used *list_datasets()* to check the available datasets, *'timit_asr'* is still in the list. So I am a little bit confused. If *'timit_asr'* need to be manually downloaded, does that mean we can **not** automatically download it **any more** in the future?", "Yes exactly. If you try to load the dataset it will ask you to download it manually first, and to pass the downloaded and extracted data like `load_dataset(\"timir_asr\", data_dir=\"path/to/extracted/data\")`\r\n\r\nThe URL we were using was coming from a host that doesn't have the permission to redistribute the data, and the dataset owners (LDC) notified us about it.", "I downloaded the timit_asr data and unzipped. But I can't run my code. Could you resolve this problem for me? Thanks\r\n\r\n import soundfile as sf\r\n import torch\r\n from datasets import load_dataset\r\n dataset = load_dataset(\"timit_asr\", data_dir=\"/Users/nguyenvannham/Documents/test_case/data\")\r\n \r\n \r\n Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\nGenerating train split: 0 examples [00:00, ? examples/s]Traceback (most recent call last):\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1571, in _prepare_split_single\r\n for key, record in generator:\r\n\r\n File \"/Users/nguyenvannham/.cache/huggingface/modules/datasets_modules/datasets/timit_asr/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2/timit_asr.py\", line 138, in _generate_examples\r\n with txt_path.open(encoding=\"utf-8\") as op:\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/pathlib.py\", line 1252, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/pathlib.py\", line 1120, in _opener\r\n return self._accessor.open(self, flags, mode)\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/Users/nguyenvannham/Documents/test_case/data/train/DR1/FCJF0/SA1.WAV.TXT'\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/var/folders/t9/l8d3rwpn1k33_gjtqs732lzc0000gn/T/ipykernel_3891/1203313828.py\", line 1, in <module>\r\n dataset = load_dataset(\"timit_asr\", data_dir=\"/Users/nguyenvannham/Documents/test_case/data\")\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/load.py\", line 1758, in load_dataset\r\n builder_instance.download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 860, in download_and_prepare\r\n self._download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1612, in _download_and_prepare\r\n super()._download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 953, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1450, in 
_prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1607, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset" ]
2022-04-14T03:28:31
2023-02-03T04:54:57
2022-05-06T16:06:51
NONE
null
null
null
null
## Dataset viewer issue for '*timit_asr*' **Link:** *https://huggingface.co/datasets/timit_asr* Issue: The timit_asr dataset has recently stopped being previewable. Am I the one who added this dataset ? No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4169/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4169/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
22 days, 12:38:20
https://api.github.com/repos/huggingface/datasets/issues/4163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4163/comments
https://api.github.com/repos/huggingface/datasets/issues/4163/events
https://github.com/huggingface/datasets/issues/4163
1,203,539,268
I_kwDODunzps5HvI1E
4,163
Optional Content Warning for Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4", "events_url": "https://api.github.com/users/TristanThrush/events{/privacy}", "followers_url": "https://api.github.com/users/TristanThrush/followers", "following_url": "https://api.github.com/users/TristanThrush/following{/other_user}", "gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TristanThrush", "id": 20826878, "login": "TristanThrush", "node_id": "MDQ6VXNlcjIwODI2ODc4", "organizations_url": "https://api.github.com/users/TristanThrush/orgs", "received_events_url": "https://api.github.com/users/TristanThrush/received_events", "repos_url": "https://api.github.com/users/TristanThrush/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions", "type": "User", "url": "https://api.github.com/users/TristanThrush", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Hi! You can use the `extra_gated_prompt` YAML field in a dataset card for displaying custom messages/warnings that the user must accept before gaining access to the actual dataset. This option also keeps the viewer hidden until the user agrees to terms. ", "Hi @mariosasko, thanks for explaining how to add this feature. \r\n\r\nIf the current dataset yaml is:\r\n```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- hate-speech-detection\r\n---\r\n```\r\n\r\nCan you provide a minimal working example of how to added the gated prompt?\r\n\r\nThanks!", "```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- hate-speech-detection\r\nextra_gated_prompt: \"This repository contains harmful content.\"\r\n---\r\n```\r\n\\+ enable `User Access requests` under the Settings pane.\r\n\r\nThere's a brief guide here https://discuss.huggingface.co/t/how-to-customize-the-user-access-requests-message/13953 , and you can see the field in action here, https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/README.md (you need to agree the terms in the Dataset Card pane to be able to access the files pane, so this comes up 403 at first).\r\n\r\nAnd a working example here! https://huggingface.co/datasets/DDSC/dkhate :) Great to be able to mitigate harms in text.", "-- is there a way to gate content anonymously, i.e. without registering which users access it?", "+1 to @leondz's question. One scenario is if you don't want the dataset to be indexed by search engines or viewed in browser b/c of upstream conditions on data, but don't want to collect emails. Some ability to turn off the dataset viewer or add a gating mechanism without emails would be fantastic." ]
2022-04-13T16:38:01
2022-06-09T20:39:02
null
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild I'm wondering if there is an option to select a content warning message that appears before the dataset preview? Otherwise, people immediately see hate speech when clicking on this dataset. **Describe the solution you'd like** A clear and concise description of what you want to happen. Implementation of a content warning message that separates users from the dataset preview until they click out of the warning. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. Possibly just a way to remove the dataset preview completely? I think I like the content warning option better, though. **Additional context** Add any other context about the feature request here.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4163/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4163/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/4160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4160/comments
https://api.github.com/repos/huggingface/datasets/issues/4160/events
https://github.com/huggingface/datasets/issues/4160
1,202,845,874
I_kwDODunzps5Hsfiy
4,160
RGBA images not showing
{ "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cceyda", "id": 15624271, "login": "cceyda", "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "organizations_url": "https://api.github.com/users/cceyda/orgs", "received_events_url": "https://api.github.com/users/cceyda/received_events", "repos_url": "https://api.github.com/users/cceyda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "type": "User", "url": "https://api.github.com/users/cceyda", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" }, { "color": "6C5FC0", "default": false, "description": "", "id": 4030246674, "name": "dataset-viewer-rgba-images", "node_id": "LA_kwDODunzps7wOK8S", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer-rgba-images" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" } ]
[ "Thanks for reporting. It's a known issue, and we hope to fix it soon.", "Fixed, thanks!" ]
2022-04-13T06:59:23
2022-06-21T16:43:11
2022-06-21T16:43:11
CONTRIBUTOR
null
null
null
null
## Dataset viewer issue for ceyda/smithsonian_butterflies_transparent [**Link:** *link to the dataset viewer page*](https://huggingface.co/datasets/ceyda/smithsonian_butterflies_transparent) ![image](https://user-images.githubusercontent.com/15624271/163117683-e91edb28-41bf-43d9-b371-5c62e14f40c9.png) Am I the one who added this dataset ? Yes 👉 More of a general issue of 'RGBA' png images not being supported (the dataset itself is just for the huggan sprint and not that important, consider it just an example)
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4160/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4160/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
69 days, 9:43:48
https://api.github.com/repos/huggingface/datasets/issues/4152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4152/comments
https://api.github.com/repos/huggingface/datasets/issues/4152/events
https://github.com/huggingface/datasets/issues/4152
1,202,034,115
I_kwDODunzps5HpZXD
4,152
ArrayND error in pyarrow 5
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Where do we bump the required pyarrow version? Any inputs on how I fix this issue? ", "We need to bump it in `setup.py` as well as update some CI job to use pyarrow 6 instead of 5 in `.circleci/config.yaml` and `.github/workflows/benchmarks.yaml`" ]
2022-04-12T15:41:40
2022-05-04T09:29:46
2022-05-04T09:29:46
MEMBER
null
null
null
null
As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5: ```python import pyarrow as pa from datasets import Array2D from datasets.table import cast_array_to_feature arr = pa.array([[[0]]]) feature_type = Array2D(shape=(1, 1), dtype="int64") cast_array_to_feature(arr, feature_type) ``` raises ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-8-04610f9fa78c> in <module> ----> 1 cast_array_to_feature(pa.array([[[0]]]), Array2D(shape=(1, 1), dtype="int32")) ~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs) 1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1673 else: -> 1674 return func(array, *args, **kwargs) 1675 1676 return wrapper ~/Desktop/hf/datasets/src/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1806 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str) 1807 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1808 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) 1809 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") 1810 ~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs) 1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1673 else: -> 1674 return func(array, *args, **kwargs) 1675 1676 return wrapper ~/Desktop/hf/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_number_to_str) 1705 array = array.storage 1706 if isinstance(pa_type, pa.ExtensionType): -> 1707 return pa_type.wrap_array(array) 1708 elif pa.types.is_struct(array.type): 1709 if pa.types.is_struct(pa_type) and ( AttributeError: 'Array2DExtensionType' object has no attribute 'wrap_array' ``` The thing is that `cast_array_to_feature` is called when writing an Arrow file, so creating an Arrow dataset using any ArrayND type currently fails. `wrap_array` has been added in pyarrow 6, so we can either bump the required pyarrow version or fix this for pyarrow 5
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4152/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4152/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
21 days, 17:48:06
https://api.github.com/repos/huggingface/datasets/issues/4150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4150/comments
https://api.github.com/repos/huggingface/datasets/issues/4150/events
https://github.com/huggingface/datasets/issues/4150
1,201,689,730
I_kwDODunzps5HoFSC
4,150
Inconsistent splits generation for datasets without loading script (packaged dataset puts everything into a single split)
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[]
2022-04-12T11:15:55
2022-04-28T21:02:44
2022-04-28T21:02:44
CONTRIBUTOR
null
null
null
null
## Describe the bug Splits for dataset loaders without scripts are prepared inconsistently. I think it might be confusing for users. ## Steps to reproduce the bug * If you load a packaged datasets from Hub, it infers splits from directory structure / filenames (check out the data [here](https://huggingface.co/datasets/nateraw/test-imagefolder-dataset)): ```python ds = load_dataset("nateraw/test-imagefolder-dataset") print(ds) ### Output: DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 6 }) test: Dataset({ features: ['image', 'label'], num_rows: 4 }) }) ``` * If you do the same from locally stored data specifying only directory path you'll get the same: ```python ds = load_dataset("/path/to/local/data/test-imagefolder-dataset") print(ds) ### Output: DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 6 }) test: Dataset({ features: ['image', 'label'], num_rows: 4 }) }) ``` * However, if you explicitely specify package name (like `imagefolder`, `csv`, `json`), all the data is put into a single split: ```python ds = load_dataset("imagefolder", data_dir="/path/to/local/data/test-imagefolder-dataset") print(ds) ### Output: DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 10 }) }) ``` ## Expected results For `load_dataset("imagefolder", data_dir="/path/to/local/data/test-imagefolder-dataset")` I expect the same output as of the two first options.
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4150/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4150/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
16 days, 9:46:49
https://api.github.com/repos/huggingface/datasets/issues/4149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4149/comments
https://api.github.com/repos/huggingface/datasets/issues/4149/events
https://github.com/huggingface/datasets/issues/4149
1,201,389,221
I_kwDODunzps5Hm76l
4,149
load_dataset for winoground returning decoding error
{ "avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4", "events_url": "https://api.github.com/users/odellus/events{/privacy}", "followers_url": "https://api.github.com/users/odellus/followers", "following_url": "https://api.github.com/users/odellus/following{/other_user}", "gists_url": "https://api.github.com/users/odellus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/odellus", "id": 4686956, "login": "odellus", "node_id": "MDQ6VXNlcjQ2ODY5NTY=", "organizations_url": "https://api.github.com/users/odellus/orgs", "received_events_url": "https://api.github.com/users/odellus/received_events", "repos_url": "https://api.github.com/users/odellus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/odellus/subscriptions", "type": "User", "url": "https://api.github.com/users/odellus", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "I thought I had fixed it with this after some helpful hints from @severo\r\n```python\r\nimport datasets \r\ntoken = 'hf_XXXXX'\r\ndataset = datasets.load_dataset(\r\n 'facebook/winoground', \r\n name='facebook--winoground', \r\n split='train', \r\n streaming=True,\r\n use_auth_token=token,\r\n)\r\n```\r\nbut I found out that wasn't the case\r\n```python\r\n[x for x in dataset]\r\n...\r\nClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```", "Hi ! This dataset structure (image + labels in a JSON file) is not supported yet, though we're adding support for this in in #4069 \r\n\r\nThe following structure will be supported soon:\r\n```\r\nmetadata.json\r\nimages/\r\n image0.png\r\n image1.png\r\n ...\r\n```\r\nWhere `metadata.json` is a JSON Lines file with labels or other metadata, and each line must have a \"file_name\" field with the name of the image file.\r\n\r\nFor the moment are only supported:\r\n- JSON files only\r\n- image files only\r\n\r\nSince this dataset is a mix of the two, at the moment it fails trying to read the images as JSON.\r\n\r\nTherefore to be able to load this dataset we need to wait for the new structure to be supported (very soon ^^), or add a dataset script in the repository that reads both the JSON and the images cc @TristanThrush \r\n", "We'll also investigate the issue with the streaming download manager in https://github.com/huggingface/datasets/issues/4139 ;) thanks for reporting", "Are there any updates on this?", "In the meantime, anyone can always download the images.zip and examples.jsonl files directly from huggingface.co - let me know if anyone has issues with that.", "I mirrored the files at https://huggingface.co/datasets/facebook/winoground in a folder on my local machine `winground`\r\nand when I tried\r\n```python\r\nimport datasets\r\nds = datasets.load_from_disk('./winoground')\r\n```\r\nI get the following error\r\n```python\r\n--------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 ds = datasets.load_from_disk('./winoground')\r\n\r\nFile ~/.local/lib/python3.8/site-packages/datasets/load.py:1759, in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1757 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1758 else:\r\n-> 1759 raise FileNotFoundError(\r\n 1760 f\"Directory {dataset_path} is neither a dataset directory nor a dataset dict directory.\"\r\n 1761 )\r\n\r\nFileNotFoundError: Directory ./winoground is neither a dataset directory nor a dataset dict directory.\r\n```\r\nso still some work to be done on the backend imo.", "Note that `load_from_disk` is the function that reloads an Arrow dataset saved with `my_dataset.save_to_disk`.\r\n\r\nOnce we do support images with metadata you'll be able to use `load_dataset(\"facebook/winoground\")` directly (or `load_dataset(\"./winoground\")` of you've cloned the winoground repository locally).", "Apologies for the delay. I added a custom dataset loading script for winoground. It should work now, with an auth token:\r\n\r\n`examples = load_dataset('facebook/winoground', use_auth_token=<your auth token>)`\r\n\r\nLet me know if there are any issues", "Adding the dataset loading script definitely didn't take as long as I thought it would 😅", "killer" ]
2022-04-12T08:16:16
2022-05-04T23:40:38
2022-05-04T23:40:38
CONTRIBUTOR
null
null
null
null
## Describe the bug I am trying to use datasets to load winoground and I'm getting a JSON decoding error. ## Steps to reproduce the bug ```python from datasets import load_dataset token = 'hf_XXXXX' # my HF access token datasets = load_dataset('facebook/winoground', use_auth_token=token) ``` ## Expected results I downloaded images.zip and examples.jsonl manually. I was expecting to have some trouble decoding json so I didn't use jsonlines but instead was able to get a complete set of 400 examples by doing ```python import json with open('examples.jsonl', 'r') as f: examples = f.read().split('\n') # Thinking this would error if the JSON is not utf-8 encoded json_data = [json.loads(x) for x in examples] print(json_data[-1]) ``` and I see ```python {'caption_0': 'someone is overdoing it', 'caption_1': 'someone is doing it over', 'collapsed_tag': 'Relation', 'id': 399, 'image_0': 'ex_399_img_0', 'image_1': 'ex_399_img_1', 'num_main_preds': 1, 'secondary_tag': 'Morpheme-Level', 'tag': 'Scope, Preposition'} ``` so I'm not sure what's going on here honestly. The file `examples.jsonl` doesn't have non-UTF-8 encoded text. ## Actual results During the split operation after downloading, datasets encounters an error in the JSON ([trace](https://gist.github.com/odellus/e55d390ca203386bf551f38e0c63a46b) abbreviated for brevity). ``` datasets/packaged_modules/json/json.py:144 in Json._generate_tables(self, files) ... UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4 - Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4", "events_url": "https://api.github.com/users/odellus/events{/privacy}", "followers_url": "https://api.github.com/users/odellus/followers", "following_url": "https://api.github.com/users/odellus/following{/other_user}", "gists_url": "https://api.github.com/users/odellus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/odellus", "id": 4686956, "login": "odellus", "node_id": "MDQ6VXNlcjQ2ODY5NTY=", "organizations_url": "https://api.github.com/users/odellus/orgs", "received_events_url": "https://api.github.com/users/odellus/received_events", "repos_url": "https://api.github.com/users/odellus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/odellus/subscriptions", "type": "User", "url": "https://api.github.com/users/odellus", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4149/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4149/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
22 days, 15:24:22
https://api.github.com/repos/huggingface/datasets/issues/4148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4148/comments
https://api.github.com/repos/huggingface/datasets/issues/4148/events
https://github.com/huggingface/datasets/issues/4148
1,201,169,242
I_kwDODunzps5HmGNa
4,148
fix confusing bleu metric example
{ "avatar_url": "https://avatars.githubusercontent.com/u/6253193?v=4", "events_url": "https://api.github.com/users/aizawa-naoki/events{/privacy}", "followers_url": "https://api.github.com/users/aizawa-naoki/followers", "following_url": "https://api.github.com/users/aizawa-naoki/following{/other_user}", "gists_url": "https://api.github.com/users/aizawa-naoki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aizawa-naoki", "id": 6253193, "login": "aizawa-naoki", "node_id": "MDQ6VXNlcjYyNTMxOTM=", "organizations_url": "https://api.github.com/users/aizawa-naoki/orgs", "received_events_url": "https://api.github.com/users/aizawa-naoki/received_events", "repos_url": "https://api.github.com/users/aizawa-naoki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aizawa-naoki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aizawa-naoki/subscriptions", "type": "User", "url": "https://api.github.com/users/aizawa-naoki", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[]
2022-04-12T06:18:26
2022-04-13T14:16:34
2022-04-13T14:16:34
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** I would like to see the example in "Metric Card for BLEU" changed. The 0th element in the predictions list is not closed in square brackets, and the 1st list is missing a comma. The BLEU score are calculated correctly, but it is difficult to understand, so it would be helpful if you could correct this. ``` >> predictions = [ ... ["hello", "there", "general", "kenobi", # <- no closing square bracket. ... ["foo", "bar" "foobar"] # <- no comma between "bar" and "foobar" ... ] >>> references = [ ... [["hello", "there", "general", "kenobi"]], ... [["foo", "bar", "foobar"]] ... ] >>> bleu = datasets.load_metric("bleu") >>> results = bleu.compute(predictions=predictions, references=references) >>> print(results) {'bleu': 0.6370964381207871, ... ``` **Describe the solution you'd like** ``` >> predictions = [ ... ["hello", "there", "general", "kenobi", # <- no closing square bracket. ... ["foo", "bar" "foobar"] # <- no comma between "bar" and "foobar" ... ] # and >>> print(results) {'bleu':1.0, ... ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4148/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4148/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 7:58:08
https://api.github.com/repos/huggingface/datasets/issues/4146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4146/comments
https://api.github.com/repos/huggingface/datasets/issues/4146/events
https://github.com/huggingface/datasets/issues/4146
1,200,215,789
I_kwDODunzps5Hidbt
4,146
SAMSum dataset viewer not working
{ "avatar_url": "https://avatars.githubusercontent.com/u/39906333?v=4", "events_url": "https://api.github.com/users/aakashnegi10/events{/privacy}", "followers_url": "https://api.github.com/users/aakashnegi10/followers", "following_url": "https://api.github.com/users/aakashnegi10/following{/other_user}", "gists_url": "https://api.github.com/users/aakashnegi10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aakashnegi10", "id": 39906333, "login": "aakashnegi10", "node_id": "MDQ6VXNlcjM5OTA2MzMz", "organizations_url": "https://api.github.com/users/aakashnegi10/orgs", "received_events_url": "https://api.github.com/users/aakashnegi10/received_events", "repos_url": "https://api.github.com/users/aakashnegi10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aakashnegi10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aakashnegi10/subscriptions", "type": "User", "url": "https://api.github.com/users/aakashnegi10", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "https://huggingface.co/datasets/samsum\r\n\r\n```\r\nStatus code: 400\r\nException: ValueError\r\nMessage: Cannot seek streaming HTTP file\r\n```", "Currently, only the datasets that can be streamed support the dataset viewer. Maybe @lhoestq @albertvillanova or @mariosasko could give more details about why the dataset cannot be streamed.", "It looks like the host (https://arxiv.org) doesn't allow HTTP Range requests, which is what we use to stream data.\r\n\r\nThis can be fix if we host the data ourselves, which is ok since the dataset is under CC BY-NC-ND 4.0" ]
2022-04-11T16:22:57
2022-04-29T16:26:09
2022-04-29T16:26:09
NONE
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4146/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4146/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
18 days, 0:03:12
https://api.github.com/repos/huggingface/datasets/issues/4143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4143/comments
https://api.github.com/repos/huggingface/datasets/issues/4143/events
https://github.com/huggingface/datasets/issues/4143
1,199,937,961
I_kwDODunzps5HhZmp
4,143
Unable to download `Wikepedia` 20220301.en version
{ "avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4", "events_url": "https://api.github.com/users/beyondguo/events{/privacy}", "followers_url": "https://api.github.com/users/beyondguo/followers", "following_url": "https://api.github.com/users/beyondguo/following{/other_user}", "gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/beyondguo", "id": 37113676, "login": "beyondguo", "node_id": "MDQ6VXNlcjM3MTEzNjc2", "organizations_url": "https://api.github.com/users/beyondguo/orgs", "received_events_url": "https://api.github.com/users/beyondguo/received_events", "repos_url": "https://api.github.com/users/beyondguo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions", "type": "User", "url": "https://api.github.com/users/beyondguo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! We've recently updated the Wikipedia script, so these changes are only available on master and can be fetched as follows:\r\n```python\r\ndataset_wikipedia = load_dataset(\"wikipedia\", \"20220301.en\", revision=\"master\")\r\n```", "Hi, how can I load the previous \"20200501.en\" version of wikipedia which had been downloaded to the default path? Thanks!", "@JiaQiSJTU just reinstall the previous verision of the package, e.g. `!pip install -q datasets==1.0.0`" ]
2022-04-11T13:00:14
2022-08-17T00:37:55
2022-04-21T17:04:14
NONE
null
null
null
null
## Describe the bug Unable to download `Wikepedia` dataset, 20220301.en version ## Steps to reproduce the bug ```python !pip install apache_beam mwparserfromhell dataset_wikipedia = load_dataset("wikipedia", "20220301.en") ``` ## Actual results ``` ValueError: BuilderConfig 20220301.en not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', 
'20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Ubuntu - Python version: 3.6 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4143/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4143/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
10 days, 4:04:00
https://api.github.com/repos/huggingface/datasets/issues/4142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4142/comments
https://api.github.com/repos/huggingface/datasets/issues/4142/events
https://github.com/huggingface/datasets/issues/4142
1,199,794,750
I_kwDODunzps5Hg2o-
4,142
Add ObjectFolder 2.0 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
[ "Datasets are not tracked in this repository anymore." ]
2022-04-11T10:57:51
2022-10-05T10:30:49
null
CONTRIBUTOR
null
null
null
null
## Adding a Dataset - **Name:** ObjectFolder 2.0 - **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files each containing the complete multisensory profile for an object instance. - **Paper:** [*link to the dataset paper if available*](https://arxiv.org/abs/2204.02389) - **Data:** https://github.com/rhgao/ObjectFolder Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4142/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4142/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/4141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4141/comments
https://api.github.com/repos/huggingface/datasets/issues/4141/events
https://github.com/huggingface/datasets/issues/4141
1,199,610,885
I_kwDODunzps5HgJwF
4,141
Why is the dataset not visible under the dataset preview section?
{ "avatar_url": "https://avatars.githubusercontent.com/u/75028682?v=4", "events_url": "https://api.github.com/users/Nid989/events{/privacy}", "followers_url": "https://api.github.com/users/Nid989/followers", "following_url": "https://api.github.com/users/Nid989/following{/other_user}", "gists_url": "https://api.github.com/users/Nid989/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Nid989", "id": 75028682, "login": "Nid989", "node_id": "MDQ6VXNlcjc1MDI4Njgy", "organizations_url": "https://api.github.com/users/Nid989/orgs", "received_events_url": "https://api.github.com/users/Nid989/received_events", "repos_url": "https://api.github.com/users/Nid989/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Nid989/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nid989/subscriptions", "type": "User", "url": "https://api.github.com/users/Nid989", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
[]
2022-04-11T08:36:42
2022-04-11T18:55:32
2022-04-11T17:09:49
NONE
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "avatar_url": "https://avatars.githubusercontent.com/u/75028682?v=4", "events_url": "https://api.github.com/users/Nid989/events{/privacy}", "followers_url": "https://api.github.com/users/Nid989/followers", "following_url": "https://api.github.com/users/Nid989/following{/other_user}", "gists_url": "https://api.github.com/users/Nid989/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Nid989", "id": 75028682, "login": "Nid989", "node_id": "MDQ6VXNlcjc1MDI4Njgy", "organizations_url": "https://api.github.com/users/Nid989/orgs", "received_events_url": "https://api.github.com/users/Nid989/received_events", "repos_url": "https://api.github.com/users/Nid989/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Nid989/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nid989/subscriptions", "type": "User", "url": "https://api.github.com/users/Nid989", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4141/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4141/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
8:33:07
https://api.github.com/repos/huggingface/datasets/issues/4140
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4140/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4140/comments
https://api.github.com/repos/huggingface/datasets/issues/4140/events
https://github.com/huggingface/datasets/issues/4140
1,199,492,356
I_kwDODunzps5Hfs0E
4,140
Error loading arxiv data set
{ "avatar_url": "https://avatars.githubusercontent.com/u/5383918?v=4", "events_url": "https://api.github.com/users/yjqiu/events{/privacy}", "followers_url": "https://api.github.com/users/yjqiu/followers", "following_url": "https://api.github.com/users/yjqiu/following{/other_user}", "gists_url": "https://api.github.com/users/yjqiu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjqiu", "id": 5383918, "login": "yjqiu", "node_id": "MDQ6VXNlcjUzODM5MTg=", "organizations_url": "https://api.github.com/users/yjqiu/orgs", "received_events_url": "https://api.github.com/users/yjqiu/received_events", "repos_url": "https://api.github.com/users/yjqiu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjqiu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjqiu/subscriptions", "type": "User", "url": "https://api.github.com/users/yjqiu", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! I think this error may be related to using an older version of the library. I was able to load the dataset without any issues using the latest version of `datasets`. Can you upgrade to the latest version of `datasets` and try again? :)", "Hi! As @stevhliu suggested, to fix the issue, update the lib to the newest version with:\r\n```\r\npip install -U datasets\r\n```\r\nand download the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset('scientific_papers', 'arxiv', download_mode=\"force_redownload\")\r\n```", "Thanks for the quick response! It works now. The problem is that I used nlp. load_dataset instead of datasets. load_dataset." ]
2022-04-11T07:06:34
2022-04-12T16:24:08
2022-04-12T16:24:08
NONE
null
null
null
null
## Describe the bug A clear and concise description of what the bug is. I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`. ``` Traceback (most recent call last): File "scripts/summarization.py", line 354, in <module> main(args) File "scripts/summarization.py", line 306, in main model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv') File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/load.py", line 549, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 463, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 522, in _download_and_prepare self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download'] ``` I then tried to ignore verification steps by `ignore_verifications=True` and there is another error. ``` Traceback (most recent call last): File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 537, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 810, in _prepare_split for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): File "/opt/conda/envs/longformer/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/datasets/scientific_papers/9e4f2cfe3d8494e9f34a84ce49c3214605b4b52a3d8eb199104430d04c52cc12/scientific_papers.py", line 108, in _generate_examples with open(path, encoding="utf-8") as f: NotADirectoryError: [Errno 20] Not a directory: '/home/username/.cache/huggingface/datasets/downloads/c0deae7af7d9c87f25dfadf621f7126f708d7dcac6d353c7564883084a000076/arxiv-dataset/train.txt' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "scripts/summarization.py", line 354, in <module> main(args) File "scripts/summarization.py", line 306, in main model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv', ignore_verifications=True) File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/load.py", line 549, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 463, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 539, in _download_and_prepare raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) OSError: Cannot find data file. 
``` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "avatar_url": "https://avatars.githubusercontent.com/u/5383918?v=4", "events_url": "https://api.github.com/users/yjqiu/events{/privacy}", "followers_url": "https://api.github.com/users/yjqiu/followers", "following_url": "https://api.github.com/users/yjqiu/following{/other_user}", "gists_url": "https://api.github.com/users/yjqiu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjqiu", "id": 5383918, "login": "yjqiu", "node_id": "MDQ6VXNlcjUzODM5MTg=", "organizations_url": "https://api.github.com/users/yjqiu/orgs", "received_events_url": "https://api.github.com/users/yjqiu/received_events", "repos_url": "https://api.github.com/users/yjqiu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjqiu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjqiu/subscriptions", "type": "User", "url": "https://api.github.com/users/yjqiu", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4140/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4140/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 9:17:34
https://api.github.com/repos/huggingface/datasets/issues/4139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4139/comments
https://api.github.com/repos/huggingface/datasets/issues/4139/events
https://github.com/huggingface/datasets/issues/4139
1,199,443,822
I_kwDODunzps5Hfg9u
4,139
Dataset viewer issue for Winoground
{ "avatar_url": "https://avatars.githubusercontent.com/u/7438704?v=4", "events_url": "https://api.github.com/users/alcinos/events{/privacy}", "followers_url": "https://api.github.com/users/alcinos/followers", "following_url": "https://api.github.com/users/alcinos/following{/other_user}", "gists_url": "https://api.github.com/users/alcinos/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alcinos", "id": 7438704, "login": "alcinos", "node_id": "MDQ6VXNlcjc0Mzg3MDQ=", "organizations_url": "https://api.github.com/users/alcinos/orgs", "received_events_url": "https://api.github.com/users/alcinos/received_events", "repos_url": "https://api.github.com/users/alcinos/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alcinos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alcinos/subscriptions", "type": "User", "url": "https://api.github.com/users/alcinos", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" }, { "color": "51F745", "default": false, "description": "", "id": 4030248571, "name": "dataset-viewer-gated", "node_id": "LA_kwDODunzps7wOLZ7", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer-gated" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "related (same dataset): https://github.com/huggingface/datasets/issues/4149. But the issue is different. Looking at it", "I thought this issue was related to the error I was seeing, but upon consideration I'd think the dataset viewer would return a 500 (unable to create the split like me) or a 404 (unable to load split b/c it was never created) error if it was having the issue I was seeing in #4149. 401 message makes it look like dataset viewer isn't passing through the identity of the user who has signed the licensing agreement when making the request to GET [examples.jsonl](https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl).", "Pinging @SBrandeis, as it seems related to gated datasets and access tokens.", "To replicate:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset= datasets.load_dataset('facebook/winoground', name='facebook--winoground', split='train', use_auth_token=\"hf_app_...\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 439, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 85, in _generate_tables\r\n for file_idx, file in enumerate(files):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 679, in __iter__\r\n yield from self.generator(*self.args, **self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 731, in _iter_from_urlpaths\r\n for dirpath, _, filenames in xwalk(urlpath, use_auth_token=use_auth_token):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 623, in xwalk\r\n for dirpath, dirnames, filenames in fs.walk(main_hop):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 372, in walk\r\n listing = self.ls(path, detail=True, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 85, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 65, in sync\r\n raise return_result\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 196, in _ls\r\n out = await self._ls_real(url, detail=detail, 
**kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 150, in _ls_real\r\n self._raise_not_found_for_status(r, url)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 208, in _raise_not_found_for_status\r\n response.raise_for_status()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py\", line 1004, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```\r\n\r\n*edited to fix `use_token` -> `use_auth_token`, thx @odellus*", "~~Using your command to replicate and changing `use_token` to `use_auth_token` fixes the problem I was seeing in #4149.~~\r\nNevermind it gave me an iterator to a method returning the same 401s. Changing `use_token` to `use_auth_token` does not fix the issue.", "After investigation with @severo , we found a potential culprit: https://github.com/huggingface/datasets/blob/3cd0a009a43f9f174056d70bfa2ca32216181926/src/datasets/utils/streaming_download_manager.py#L610-L624\r\n\r\nThe streaming manager does not seem to pass `use_auth_token` to `fsspec` when streaming and not iterating content of a zip archive\r\n\r\ncc @albertvillanova @lhoestq ", "I was able to reproduce it on a private dataset, let me work on a fix", "Hey @lhoestq, Thanks for working on a fix! Any plans to merge #4173 into master? ", "Thanks for the heads up, I still need to fix some tests that are failing in the CI before merging ;)", "The fix has been merged, we'll do a new release soon, and update the dataset viewer", "Fixed, thanks!\r\n<img width=\"1119\" alt=\"Capture d’écran 2022-06-21 à 18 41 09\" src=\"https://user-images.githubusercontent.com/1676121/174853571-afb0749c-4178-4c89-ab40-bb162a449788.png\">\r\n" ]
2022-04-11T06:11:41
2022-06-21T16:43:58
2022-06-21T16:43:58
NONE
null
null
null
null
## Dataset viewer issue for 'Winoground' **Link:** [dataset viewer page](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train) Getting 401, message='Unauthorized'. The dataset is subject to authorization, but I can access the files from the interface, so I assume I have been granted access to it. I'd assume the permission somehow doesn't propagate to the dataset viewer tool. Am I the one who added this dataset ? No
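Editor's note (added): a minimal sketch of loading this gated dataset locally with an access token, assuming a `datasets` release that already includes the streaming-auth fix discussed in this thread; the split name and token handling are illustrative assumptions, separate from the viewer-side bug being reported.

```python
from datasets import load_dataset

# Requires having accepted the license terms on the Hub first.
# use_auth_token=True reads the token stored by `huggingface-cli login`.
dataset = load_dataset(
    "facebook/winoground",
    split="train",
    streaming=True,
    use_auth_token=True,
)
print(next(iter(dataset)))
```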
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4139/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4139/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
71 days, 10:32:17
https://api.github.com/repos/huggingface/datasets/issues/4138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4138/comments
https://api.github.com/repos/huggingface/datasets/issues/4138/events
https://github.com/huggingface/datasets/issues/4138
1,199,291,730
I_kwDODunzps5He71S
4,138
Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
{ "avatar_url": "https://avatars.githubusercontent.com/u/55381086?v=4", "events_url": "https://api.github.com/users/iluvvatar/events{/privacy}", "followers_url": "https://api.github.com/users/iluvvatar/followers", "following_url": "https://api.github.com/users/iluvvatar/following{/other_user}", "gists_url": "https://api.github.com/users/iluvvatar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iluvvatar", "id": 55381086, "login": "iluvvatar", "node_id": "MDQ6VXNlcjU1MzgxMDg2", "organizations_url": "https://api.github.com/users/iluvvatar/orgs", "received_events_url": "https://api.github.com/users/iluvvatar/received_events", "repos_url": "https://api.github.com/users/iluvvatar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iluvvatar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iluvvatar/subscriptions", "type": "User", "url": "https://api.github.com/users/iluvvatar", "user_view_type": "public" }
[]
closed
false
null
[]
[ "To reproduce:\r\n\r\n```python\r\n>>> import datasets\r\n>>> datasets.get_dataset_split_names('MalakhovIlya/RuREBus', config_name='raw_txt')\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/MalakhovIlya--RuREBus/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed/RuREBus.py\", line 101, in _split_generators\r\n decode_file_names(folder)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/MalakhovIlya--RuREBus/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed/RuREBus.py\", line 26, in decode_file_names\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py\", line 66, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\nTypeError: xwalk() got an unexpected keyword argument 'topdown'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nIt's not related to the dataset viewer. Maybe @albertvillanova or @lhoestq could help more on this issue.", "Hi! This issue stems from the fact that `xwalk`, which is a streamable version of `os.walk`, doesn't support the `topdown` param due to `fsspec`'s `walk` also not supporting it, so fixing this issue could be tricky. \r\n\r\n@MalakhovIlyaPavlovich You can avoid the error by tweaking your data processing and not using this param. (and `Path.rename`, which also cannot be streamed) ", "@mariosasko thank you for your reply. I couldn't reproduce error showed by @severo either on Ubuntu 20.04.3 LTS, Windows 10 and Google Colab environments. 
But trying to avoid using os.walk(topdown=False) and Path.rename(), In _split_generators I replaced\r\n```\r\ndef decode_file_names(folder):\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n root = Path(root)\r\n for file in files:\r\n old_name = root / Path(file)\r\n new_name = root / Path(\r\n file.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n for dir in dirs:\r\n old_name = root / Path(dir)\r\n new_name = root / Path(dir.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\ndecode_file_names(folder)\r\n```\r\nby\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent / 'extracted' / p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nif not is_url(zip_file):\r\n folder = extract(zip_file)\r\nelse:\r\n folder = None\r\n```\r\nand now everything works well except data viewer for \"raw_txt\" subset: dataset preview on hub shows \"No data.\". As far as I understand dl_manager.download returns original URL when we are calling datasets.get_dataset_split_names and my suspicions are that dataset viewer can do smth similar. I couldn't find information about how it works. I would be very grateful, if you could tell me how to fix this)", "This is what I get when I try to stream the `raw_txt` subset:\r\n```python\r\n>>> dset = load_dataset(\"MalakhovIlya/RuREBus\", \"raw_txt\", split=\"raw_txt\", streaming=True)\r\n>>> next(iter(dset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nStopIteration\r\n```\r\nSo there is a bug in your script.", "streaming=True helped me to find solution. I fixed\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent / 'extracted' / p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nfolder = extract(zip_file)\r\n```\r\nby \r\n```\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\npath = os.path.join(folder, 'MED_txt/unparsed_txt')\r\nfor root, dirs, files in os.walk(path):\r\n decoded_root_name = Path(root).name.encode('cp437').decode('cp866')\r\n```\r\n@mariosasko thank you for your help :)" ]
2022-04-11T02:07:13
2022-04-19T03:15:46
2022-04-16T15:46:29
NONE
null
null
null
null
## Dataset viewer issue for 'MalakhovIlya/RuREBus' **Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus **Description** Using os.walk(topdown=False) in DatasetBuilder causes the following error: Status code: 400 Exception: TypeError Message: xwalk() got an unexpected keyword argument 'topdown' Couldn't find where "xwalk" comes from. How can I fix this? Am I the one who added this dataset ? Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/55381086?v=4", "events_url": "https://api.github.com/users/iluvvatar/events{/privacy}", "followers_url": "https://api.github.com/users/iluvvatar/followers", "following_url": "https://api.github.com/users/iluvvatar/following{/other_user}", "gists_url": "https://api.github.com/users/iluvvatar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iluvvatar", "id": 55381086, "login": "iluvvatar", "node_id": "MDQ6VXNlcjU1MzgxMDg2", "organizations_url": "https://api.github.com/users/iluvvatar/orgs", "received_events_url": "https://api.github.com/users/iluvvatar/received_events", "repos_url": "https://api.github.com/users/iluvvatar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iluvvatar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iluvvatar/subscriptions", "type": "User", "url": "https://api.github.com/users/iluvvatar", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4138/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4138/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
5 days, 13:39:16
https://api.github.com/repos/huggingface/datasets/issues/4134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4134/comments
https://api.github.com/repos/huggingface/datasets/issues/4134/events
https://github.com/huggingface/datasets/issues/4134
1,197,937,146
I_kwDODunzps5HZxH6
4,134
ELI5 supporting documents
{ "avatar_url": "https://avatars.githubusercontent.com/u/69015896?v=4", "events_url": "https://api.github.com/users/saurabh-0077/events{/privacy}", "followers_url": "https://api.github.com/users/saurabh-0077/followers", "following_url": "https://api.github.com/users/saurabh-0077/following{/other_user}", "gists_url": "https://api.github.com/users/saurabh-0077/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/saurabh-0077", "id": 69015896, "login": "saurabh-0077", "node_id": "MDQ6VXNlcjY5MDE1ODk2", "organizations_url": "https://api.github.com/users/saurabh-0077/orgs", "received_events_url": "https://api.github.com/users/saurabh-0077/received_events", "repos_url": "https://api.github.com/users/saurabh-0077/repos", "site_admin": false, "starred_url": "https://api.github.com/users/saurabh-0077/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saurabh-0077/subscriptions", "type": "User", "url": "https://api.github.com/users/saurabh-0077", "user_view_type": "public" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
open
false
null
[]
[ "Hi ! Please post your question on the [forum](https://discuss.huggingface.co/), more people will be able to help you there ;)" ]
2022-04-08T23:36:27
2022-04-13T13:52:46
null
NONE
null
null
null
null
If I am using dense search to create the supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hours.
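Editor's note (added): a rough sketch of the kind of dense-retrieval indexing step the question refers to; its runtime depends heavily on the GPU and corpus size, which is why figures like 18 hours get quoted. The dataset and model names below are the ones commonly used in the ELI5 long-form QA examples and are assumptions here, not details from the question.

```python
import torch
from datasets import load_dataset
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "facebook/dpr-ctx_encoder-single-nq-base"
tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(model_name)
encoder = DPRContextEncoder.from_pretrained(model_name).to(device)

# Wikipedia snippets used as the pool of supporting documents.
snippets = load_dataset("wiki_snippets", "wiki40b_en_100_0", split="train")

def embed(batch):
    # Encode a batch of passages with the DPR context encoder.
    inputs = tokenizer(
        batch["passage_text"], padding=True, truncation=True, return_tensors="pt"
    ).to(device)
    with torch.no_grad():
        embeddings = encoder(**inputs).pooler_output
    return {"embeddings": embeddings.cpu().numpy()}

# Embedding tens of millions of passages is the slow part (hours on a single GPU).
snippets = snippets.map(embed, batched=True, batch_size=64)

# Build a FAISS index over the embeddings (requires faiss-cpu or faiss-gpu).
snippets.add_faiss_index(column="embeddings")
```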
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4134/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4134/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/4133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4133/comments
https://api.github.com/repos/huggingface/datasets/issues/4133/events
https://github.com/huggingface/datasets/issues/4133
1,197,830,623
I_kwDODunzps5HZXHf
4,133
HANS dataset preview broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci", "user_view_type": "public" }
[ { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
null
[]
[ "The dataset cannot be loaded, be it in normal or streaming mode.\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1595, in __next__\r\n out = self.readline()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1592, in readline\r\n return self.readuntil(b\"\\n\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1581, in readuntil\r\n self.seek(start + found + len(char))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 676, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=False)\r\nDownloading and preparing dataset hans/plain_text (download: 29.51 MiB, generated: 30.34 MiB, post-processed: Unknown size, total: 59.85 MiB) to /home/slesage/.cache/huggingface/datasets/hans/plain_text/1.0.0/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1087, in _prepare_split\r\n for key, record in logging.tqdm(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, 
in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\nValueError: readline of closed file\r\n```\r\n\r\n", "Hi! I've opened a PR that should make this dataset stremable. You can test it as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"hans\", split=\"train\", streaming=True, revision=\"49decd29839c792ecc24ac88f861cbdec30c1c40\")\r\n```\r\n\r\n@severo The current script doesn't throw an error in normal mode (only in streaming mode) on my local machine or in Colab. Can you update your installation of `datasets` and see if that fixes the issue?", "Thanks for this. It works well, thanks! The dataset viewer is using https://github.com/huggingface/datasets/releases/tag/2.0.0, I'm eager to upgrade to 2.0.1 😉" ]
2022-04-08T21:06:15
2022-04-13T11:57:34
2022-04-13T11:57:34
NONE
null
null
null
null
## Dataset viewer issue for '*hans*' **Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans) HANS dataset preview is broken with error 400 Am I the one who added this dataset ? No
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4133/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4133/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4 days, 14:51:19
https://api.github.com/repos/huggingface/datasets/issues/4129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4129/comments
https://api.github.com/repos/huggingface/datasets/issues/4129/events
https://github.com/huggingface/datasets/issues/4129
1,197,376,796
I_kwDODunzps5HXoUc
4,129
dataset metadata for reproducibility
{ "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nbroad1881", "id": 24982805, "login": "nbroad1881", "node_id": "MDQ6VXNlcjI0OTgyODA1", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "repos_url": "https://api.github.com/users/nbroad1881/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "type": "User", "url": "https://api.github.com/users/nbroad1881", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "+1 on this idea. This could be powerful for helping better track datasets used for model training and help with automatic model card creation. \r\n\r\nOne possible way of doing this would be to store some/most/all the arguments passed to `load_dataset` if a hub id is passed. i.e. store the Hub ID, configuration, etc. \r\n\r\ncc @tomaarsen" ]
2022-04-08T14:17:28
2023-09-29T09:23:56
null
NONE
null
null
null
null
When pulling a dataset from the hub, it would be useful to have some metadata about the specific dataset and version that is used. The metadata could then be passed to the `Trainer`, which could save it to a model card. This is useful for people who run many experiments on different versions (commits/branches) of the same dataset. The dataset could carry a list of “source datasets” metadata that ignores whatever happens to the data before it arrives in the Trainer (i.e. mapping, filtering, etc.). Here is a basic representation (made by @lhoestq) ```python >>> from datasets import load_dataset >>> >>> my_dataset = load_dataset(...)["train"] >>> my_dataset = my_dataset.map(...) >>> >>> my_dataset.sources [HFHubDataset(repo_id=..., revision=..., arguments={...})] ```
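Editor's note (added): until something like the proposed `my_dataset.sources` exists, provenance can be recorded by hand; below is a minimal sketch that pins the exact dataset revision via `huggingface_hub`. The repo id, config and split are placeholders, not part of the proposal.

```python
from datasets import load_dataset
from huggingface_hub import HfApi

repo_id = "glue"  # placeholder Hub dataset id
sha = HfApi().dataset_info(repo_id).sha  # commit the data is read from

# Pin the load to that exact revision so the experiment is reproducible.
my_dataset = load_dataset(repo_id, "mrpc", split="train", revision=sha)

# Stash the provenance so it can be written into a model card later.
provenance = {"repo_id": repo_id, "revision": sha, "config": "mrpc", "split": "train"}
print(provenance)
```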
null
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/4129/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4129/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/4126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4126/comments
https://api.github.com/repos/huggingface/datasets/issues/4126/events
https://github.com/huggingface/datasets/issues/4126
1,196,665,194
I_kwDODunzps5HU6lq
4,126
dataset viewer issue for common_voice
{ "avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4", "events_url": "https://api.github.com/users/laphang/events{/privacy}", "followers_url": "https://api.github.com/users/laphang/followers", "following_url": "https://api.github.com/users/laphang/following{/other_user}", "gists_url": "https://api.github.com/users/laphang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/laphang", "id": 24724502, "login": "laphang", "node_id": "MDQ6VXNlcjI0NzI0NTAy", "organizations_url": "https://api.github.com/users/laphang/orgs", "received_events_url": "https://api.github.com/users/laphang/received_events", "repos_url": "https://api.github.com/users/laphang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laphang/subscriptions", "type": "User", "url": "https://api.github.com/users/laphang", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" }, { "color": "F83ACF", "default": false, "description": "", "id": 4027368468, "name": "audio_column", "node_id": "LA_kwDODunzps7wDMQU", "url": "https://api.github.com/repos/huggingface/datasets/labels/audio_column" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" } ]
[ "Yes, it's a known issue, and we expect to fix it soon.", "Fixed.\r\n\r\n<img width=\"1393\" alt=\"Capture d’écran 2022-04-25 à 15 42 05\" src=\"https://user-images.githubusercontent.com/1676121/165101176-d729d85b-efff-45a8-bad1-b69223edba5f.png\">\r\n" ]
2022-04-07T23:34:28
2022-04-25T13:42:17
2022-04-25T13:42:16
NONE
null
null
null
null
## Dataset viewer issue for 'common_voice' **Link:** https://huggingface.co/datasets/common_voice Server Error Status code: 400 Exception: TypeError Message: __init__() got an unexpected keyword argument 'audio_column' Am I the one who added this dataset ? No
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4126/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4126/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
17 days, 14:07:48
https://api.github.com/repos/huggingface/datasets/issues/4124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4124/comments
https://api.github.com/repos/huggingface/datasets/issues/4124/events
https://github.com/huggingface/datasets/issues/4124
1,196,469,842
I_kwDODunzps5HUK5S
4,124
Image decoding often fails when transforming Image datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/17025191?v=4", "events_url": "https://api.github.com/users/RafayAK/events{/privacy}", "followers_url": "https://api.github.com/users/RafayAK/followers", "following_url": "https://api.github.com/users/RafayAK/following{/other_user}", "gists_url": "https://api.github.com/users/RafayAK/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RafayAK", "id": 17025191, "login": "RafayAK", "node_id": "MDQ6VXNlcjE3MDI1MTkx", "organizations_url": "https://api.github.com/users/RafayAK/orgs", "received_events_url": "https://api.github.com/users/RafayAK/received_events", "repos_url": "https://api.github.com/users/RafayAK/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RafayAK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RafayAK/subscriptions", "type": "User", "url": "https://api.github.com/users/RafayAK", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "A quick hack I have found is that we can call the image first before running the transforms and it makes sure the image is decoded before being passed on.\r\n\r\nFor this I just needed to add `example['img'] = example['img']` to the top of my `generate_flipped_data` function, defined above, so that image decode in invoked.\r\n\r\nAfter this minor change this function works:\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n example['img'] = example['img'] # <<< This is the only change\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n```", "Hi @RafayAK, thanks for reporting.\r\n\r\nCurrent implementation of the Image feature performs the decoding only if the \"img\" field is accessed by the mapped function.\r\n\r\nIn your original `generate_flipped_data` function:\r\n- it only accesses the \"img\" field (and thus performs decoding) if `rng.random() > p`;\r\n- on the other hand, for the cases where `rng.random() <= p`, the \"img\" field is not accessed and thus no decoding is performed for those examples\r\n\r\nBy adding the code line `example['img'] = example['img']`, you make sure the \"img\" field is accessed in all cases, and the decoding is done for all examples.\r\n\r\nAlso note that there is a little bug in your implementation: `p` is not the probability of flipping, but the probability of not-flipping; the larger is `p`, the smaller is the probability of flipping.\r\n\r\nSome refactoring (fixing also `p`):\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down.\r\n\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n do_flip = rng.random() < p # Note the \"<\" sign here instead of \">\"\r\n example['img'] = example['img'].transpose(1) if do_flip else example['img'] # Note \"img\" is always accessed\r\n example['is_flipped'] = 1 if do_flip else 0\r\n return example", "@albertvillanova Thanks for letting me know this is intended behavior. The docs are severely lacking on this, if I hadn't posted this here I would have never found out how I'm actually supposed to modify images in a Dataset object.", "@albertvillanova Secondly if you check the error message it shows that around 1999 images were successfully created, I'm pretty sure some of them were also flipped during the process. Back to my main contention, sometimes the decoding takes place other times it fails. \r\n\r\nI suppose to run `map` on any dataset all the examples should be invoked even if on some of them we end up doing nothing, is that right?", "Hi @RafayAK! 
I've opened a PR with the fix, which adds a fallback to reattempt casting to PyArrow format with a more robust (but more expensive) procedure if the first attempt fails. Feel free to test it by installing `datasets` from the PR branch with the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-4124\r\n```", "@mariosasko I'll try this right away and report back.", "@mariosasko Thanks a lot for looking into this, now the `map` function at least behaves as one would expect a function to behave. \r\n\r\nLooking forward to exploring Hugging Face more and even contributing 😃.\r\n\r\n```bash\r\n $ conda list | grep datasets\r\ndatasets 2.0.1.dev0 pypi_0 pypi\r\n\r\n```\r\n\r\n```python\r\ndef preprocess_data(dataset):\r\n \"\"\"\r\n Helper funtion to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and\r\n add is_flipped column\r\n Args:\r\n dataset: HuggingFace CIFAR-100 Dataset Object\r\n\r\n Returns:\r\n new_dataset: A Dataset object with \"img\" and \"is_flipped\" columns only\r\n\r\n \"\"\"\r\n # remove fine_label and coarse_label columns\r\n new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])\r\n # add the column for is_flipped\r\n new_dataset = new_dataset.add_column(name=\"is_flipped\", column=np.zeros((len(new_dataset)), dtype=np.uint8))\r\n\r\n return new_dataset\r\n\r\n\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n # example['img'] = example['img']\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n\r\nmy_test = preprocess_data(test_dataset)\r\nmy_test = my_test.map(generate_flipped_data)\r\n```\r\n\r\nThe output now show the function was applied successfully:\r\n``` bash\r\n/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py\r\nDownloading builder script: 5.61kB [00:00, 3.16MB/s] \r\nDownloading metadata: 4.21kB [00:00, 2.56MB/s] \r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\n100%|██████████| 10000/10000 [00:01<00:00, 5149.15ex/s]\r\n```\r\n" ]
2022-04-07T19:17:25
2022-04-13T14:01:16
2022-04-13T14:01:16
NONE
null
null
null
null
## Describe the bug When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors. Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image passed around is still raw bytes: ``` [{'bytes': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \x00\x00\x00 \x08\x02\x00\x00\x00\xfc\x18\xed\xa3\x00\x00\x08\x02IDATx\x9cEVIs[\xc7\x11\xeemf\xde\x82\x8d\x80\x08\x89"\xb5V\\\xb6\x94(\xe5\x9f\x90\xca5\x7f$\xa7T\xe5\x9f&9\xd9\x8a\\.\xdb\xa4$J\xa4\x00\x02x\xc0{\xb3t\xe7\x00\xca\x99\xd3\\f\xba\xba\xbf\xa5?|\xfa\xf4\xa2\xeb\xba\xedv\xa3f^\xf8\xd5\x0bY\xb6\x10\xb3\xaaDq\xcd\x83\x87\xdf5\xf3gZ\x1a\x04\x0f\xa0fp\xfa\xe0\xd4\x07?\x9dN\xc4\xb1\x99\xfd\xf2\xcb/\x97\x97\x97H\xa2\xaaf\x16\x82\xaf\xeb\xca{\xbf\xd9l.\xdf\x7f\xfa\xcb_\xff&\x88\x08\x00\x80H\xc0\x80@.;\x0f\x8c@#v\xe3\xe5\xfc\xd1\x9f\xee6q\xbf\xdf\xa6\x14\'\x93\xf1\xc3\xe5\xe3\xd1x\x14c\x8c1\xa5\x1c\x9dsM\xd3\xb4\xed\x08\x89SJ)\xa5\xedv\xbb^\xafNO\x97D\x84Hf .... ``` ## Steps to reproduce the bug ```python from datasets import load_dataset, Dataset import numpy as np # seeded NumPy random number generator for reprodducinble results. rng = np.random.default_rng(seed=0) test_dataset = load_dataset('cifar100', split="test") def preprocess_data(dataset): """ Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and add is_flipped column Args: dataset: HuggingFace CIFAR-100 Dataset Object Returns: new_dataset: A Dataset object with "img" and "is_flipped" columns only """ # remove fine_label and coarse_label columns new_dataset = dataset.remove_columns(['fine_label', 'coarse_label']) # add the column for is_flipped new_dataset = new_dataset.add_column(name="is_flipped", column=np.zeros((len(new_dataset)), dtype=np.uint8)) return new_dataset def generate_flipped_data(example, p=0.5): """ A Dataset mapping function that transforms some of the images up-side-down. If the probability value (p) is 0.5 approximately half the images will be flipped upside-down Args: example: An example from the dataset containing a Python dictionary with "img" and "is_flipped" key-value pair p: the probability of flipping the image up-side-down, Default 0.5 Returns: example: A Dataset object """ # example['img'] = example['img'] if rng.random() > p: # the flip the image and set is_flipped column to 1 example['img'] = example['img'].transpose( 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM) example['is_flipped'] = 1 return example my_test = preprocess_data(test_dataset) my_test = my_test.map(generate_flipped_data) ``` ## Expected results The dataset should be transformed without problems. 
## Actual results ``` /home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142) Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142) 20%|█▉ | 1999/10000 [00:00<00:01, 5560.44ex/s] Traceback (most recent call last): File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2326, in _map_single writer.write(example) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 441, in write self.write_examples_on_file() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True)) File "pyarrow/array.pxi", line 316, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py", line 55, in <module> my_test = my_test.map(generate_flipped_data) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1953, in map return self._map_single( File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 519, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 486, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2360, in _map_single writer.finalize() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 522, in finalize self.write_examples_on_file() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch 
arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True)) File "pyarrow/array.pxi", line 316, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type Process finished with exit code 1 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux(Fedora 35) - Python version: 3.10 - PyArrow version: 7.0.0
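A possible workaround, offered here only as an untested sketch (the `features=` argument of `map` is a real `datasets` parameter, but whether it resolves this exact writer error is an assumption on my part): declaring the output features explicitly tells the Arrow writer that `img` is an `Image` feature, so the returned PIL objects are encoded through that feature instead of `pyarrow` trying to infer a type for them. ```python from datasets import load_dataset import numpy as np rng = np.random.default_rng(seed=0) test_dataset = load_dataset("cifar100", split="test") my_test = preprocess_data(test_dataset)  # helper functions defined in the report above # Pass the already-known features so the writer encodes PIL images via the Image feature # rather than letting pyarrow guess the type of the flipped examples. my_test = my_test.map(generate_flipped_data, features=my_test.features) ```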
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4124/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4124/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
5 days, 18:43:51
https://api.github.com/repos/huggingface/datasets/issues/4123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4123/comments
https://api.github.com/repos/huggingface/datasets/issues/4123/events
https://github.com/huggingface/datasets/issues/4123
1,196,367,512
I_kwDODunzps5HTx6Y
4,123
Building C4 takes forever
{ "avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4", "events_url": "https://api.github.com/users/StellaAthena/events{/privacy}", "followers_url": "https://api.github.com/users/StellaAthena/followers", "following_url": "https://api.github.com/users/StellaAthena/following{/other_user}", "gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/StellaAthena", "id": 15899312, "login": "StellaAthena", "node_id": "MDQ6VXNlcjE1ODk5MzEy", "organizations_url": "https://api.github.com/users/StellaAthena/orgs", "received_events_url": "https://api.github.com/users/StellaAthena/received_events", "repos_url": "https://api.github.com/users/StellaAthena/repos", "site_admin": false, "starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions", "type": "User", "url": "https://api.github.com/users/StellaAthena", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @StellaAthena, thanks for reporting.\r\n\r\nPlease note, that our `datasets` library performs several operations in order to load a dataset, among them:\r\n- it downloads all the required files: for C4 \"en\", 378.69 GB of JSON GZIPped files\r\n- it parses their content to generate the dataset\r\n- it caches the dataset in an Arrow file: for C4 \"en\", this file size is 1.87 TB\r\n- it memory-maps the Arrow file\r\n\r\nIf it suits your use case, you might load this dataset in streaming mode:\r\n- no Arrow file is generated\r\n- you can iterate over elements immediately (no need to wait to download all the entire files)\r\n\r\n```python\r\nIn [45]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"c4\", \"en\", split=\"train\", streaming=True)\r\n ...: for item in ds:\r\n ...: print(item)\r\n ...: break\r\n ...: \r\n{'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.', 'timestamp': '2019-04-25T12:57:54Z', 'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/'}\r\n```\r\nI hope this is useful for your use case." ]
2022-04-07T17:41:30
2023-06-26T22:01:29
2023-06-26T22:01:29
NONE
null
null
null
null
## Describe the bug C4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources. ## Steps to reproduce the bug ```python c4 = datasets.load_dataset("c4", "en") ``` ## Expected results I would like to be able to download pre-split data. ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4", "events_url": "https://api.github.com/users/StellaAthena/events{/privacy}", "followers_url": "https://api.github.com/users/StellaAthena/followers", "following_url": "https://api.github.com/users/StellaAthena/following{/other_user}", "gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/StellaAthena", "id": 15899312, "login": "StellaAthena", "node_id": "MDQ6VXNlcjE1ODk5MzEy", "organizations_url": "https://api.github.com/users/StellaAthena/orgs", "received_events_url": "https://api.github.com/users/StellaAthena/received_events", "repos_url": "https://api.github.com/users/StellaAthena/repos", "site_admin": false, "starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions", "type": "User", "url": "https://api.github.com/users/StellaAthena", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4123/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4123/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
445 days, 4:19:59
https://api.github.com/repos/huggingface/datasets/issues/4122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4122/comments
https://api.github.com/repos/huggingface/datasets/issues/4122/events
https://github.com/huggingface/datasets/issues/4122
1,196,095,072
I_kwDODunzps5HSvZg
4,122
medical_dialog zh has very slow _generate_examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nbroad1881", "id": 24982805, "login": "nbroad1881", "node_id": "MDQ6VXNlcjI0OTgyODA1", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "repos_url": "https://api.github.com/users/nbroad1881/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "type": "User", "url": "https://api.github.com/users/nbroad1881", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @nbroad1881, thanks for reporting.\r\n\r\nLet me have a look to try to improve its performance. ", "Thanks @nbroad1881 for reporting! I don't recall it taking so long. I will also have a look at this. \r\n@albertvillanova please let me know if I am doing something unnecessary or time consuming.", "Hi @nbroad1881 and @vrindaprabhu,\r\n\r\nAs a workaround for the performance of the parsing of the raw data files (this could be addressed in a subsequent PR), I have found that there are also processed data files, that do not require parsing. I have added these as new configurations `processed.en` and `processed.zh`:\r\n```python\r\nds = load_dataset(\"medical_dialog\", \"processed.zh\")\r\n```" ]
2022-04-07T14:00:51
2022-04-08T16:20:51
2022-04-08T16:20:51
NONE
null
null
null
null
## Describe the bug After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours. ## Steps to reproduce the bug The easiest way I've found to download files from Google Drive is to use `gdown` and use Google Colab because the download speeds will be very high due to the fact that they are both in Google Cloud. ```python file_ids = [ "1AnKxGEuzjeQsDHHqL3NqI_aplq2hVL_E", "1tt7weAT1SZknzRFyLXOT2fizceUUVRXX", "1A64VBbsQ_z8wZ2LDox586JIyyO6mIwWc", "1AKntx-ECnrxjB07B6BlVZcFRS4YPTB-J", "1xUk8AAua_x27bHUr-vNoAuhEAjTxOvsu", "1ezKTfe7BgqVN5o-8Vdtr9iAF0IueCSjP", "1tA7bSOxR1RRNqZst8cShzhuNHnayUf7c", "1pA3bCFA5nZDhsQutqsJcH3d712giFb0S", "1pTLFMdN1A3ro-KYghk4w4sMz6aGaMOdU", "1dUSnG0nUPq9TEQyHd6ZWvaxO0OpxVjXD", "1UfCH05nuWiIPbDZxQzHHGAHyMh8dmPQH", ] for i in file_ids: url = f"https://drive.google.com/uc?id={i}" !gdown $url from datasets import load_dataset ds = load_dataset("medical_dialog", "zh", data_dir="./") ``` ## Expected results Faster load time ## Actual results `Generating train split: 33%: 625519/1921127 [4:31:03<31:39:20, 11.37 examples/s]` ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 @vrindaprabhu , could you take a look at this since you implemented it? I think the `_generate_examples` function might need to be rewritten
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4122/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4122/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 2:20:00
https://api.github.com/repos/huggingface/datasets/issues/4121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4121/comments
https://api.github.com/repos/huggingface/datasets/issues/4121/events
https://github.com/huggingface/datasets/issues/4121
1,196,000,018
I_kwDODunzps5HSYMS
4,121
datasets.load_metric cannot load a local metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/51749469?v=4", "events_url": "https://api.github.com/users/SadGare/events{/privacy}", "followers_url": "https://api.github.com/users/SadGare/followers", "following_url": "https://api.github.com/users/SadGare/following{/other_user}", "gists_url": "https://api.github.com/users/SadGare/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SadGare", "id": 51749469, "login": "SadGare", "node_id": "MDQ6VXNlcjUxNzQ5NDY5", "organizations_url": "https://api.github.com/users/SadGare/orgs", "received_events_url": "https://api.github.com/users/SadGare/received_events", "repos_url": "https://api.github.com/users/SadGare/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SadGare/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SadGare/subscriptions", "type": "User", "url": "https://api.github.com/users/SadGare", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hello, could you tell me how this issue can be fixed? I'm coming across the same issue." ]
2022-04-07T12:48:56
2023-01-18T14:30:46
2022-04-07T13:53:27
NONE
null
null
null
null
## Describe the bug No matter how I hard try to tell load_metric that I want to load a local metric file, it still continues to fetch things on the Internet. And unfortunately it says 'ConnectionError: Couldn't reach'. However I can download this file without connectionerror and tell load_metric its local directory. And it comes back where it begins... ## Steps to reproduce the bug ```python metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py') ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py metric = load_metric(path='bleu') ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.12.1/metrics/bleu/bleu.py metric = load_metric(path='./blue/bleu.py') ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py ``` ## Expected results I do read the docs [here](https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_metric). There are no other parameters that help function to distinguish from local and online file but path. As what I code above, it should load from local. ## Actual results > metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py') > ~\AppData\Local\Temp\ipykernel_19636\1855752034.py in <module> ----> 1 metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py') D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs) 817 if data_files is None and data_dir is not None: 818 data_files = os.path.join(data_dir, "**") --> 819 820 self.name = name 821 self.revision = revision D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs) 639 self, 640 path: str, --> 641 download_config: Optional[DownloadConfig] = None, 642 download_mode: Optional[DownloadMode] = None, 643 dynamic_modules_path: Optional[str] = None, D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 297 token = hf_api.HfFolder.get_token() 298 if token: --> 299 headers["authorization"] = f"Bearer {token}" 300 return headers 301 D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 604 def _resumable_file_manager(): 605 with open(incomplete_path, "a+b") as f: --> 606 yield f 607 608 temp_file_manager = _resumable_file_manager ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.7.13 - PyArrow version: 7.0.0 - Pandas version: 1.3.4 Any advice would be appreciated.
{ "avatar_url": "https://avatars.githubusercontent.com/u/51749469?v=4", "events_url": "https://api.github.com/users/SadGare/events{/privacy}", "followers_url": "https://api.github.com/users/SadGare/followers", "following_url": "https://api.github.com/users/SadGare/following{/other_user}", "gists_url": "https://api.github.com/users/SadGare/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SadGare", "id": 51749469, "login": "SadGare", "node_id": "MDQ6VXNlcjUxNzQ5NDY5", "organizations_url": "https://api.github.com/users/SadGare/orgs", "received_events_url": "https://api.github.com/users/SadGare/received_events", "repos_url": "https://api.github.com/users/SadGare/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SadGare/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SadGare/subscriptions", "type": "User", "url": "https://api.github.com/users/SadGare", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4121/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4121/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1:04:31
https://api.github.com/repos/huggingface/datasets/issues/4120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4120/comments
https://api.github.com/repos/huggingface/datasets/issues/4120/events
https://github.com/huggingface/datasets/issues/4120
1,195,887,430
I_kwDODunzps5HR8tG
4,120
Representing dictionary (JSON) objects as features
{ "avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4", "events_url": "https://api.github.com/users/yanaiela/events{/privacy}", "followers_url": "https://api.github.com/users/yanaiela/followers", "following_url": "https://api.github.com/users/yanaiela/following{/other_user}", "gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yanaiela", "id": 8031035, "login": "yanaiela", "node_id": "MDQ6VXNlcjgwMzEwMzU=", "organizations_url": "https://api.github.com/users/yanaiela/orgs", "received_events_url": "https://api.github.com/users/yanaiela/received_events", "repos_url": "https://api.github.com/users/yanaiela/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions", "type": "User", "url": "https://api.github.com/users/yanaiela", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[]
2022-04-07T11:07:41
2022-04-07T11:07:41
null
CONTRIBUTOR
null
null
null
null
In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and which may differ between samples), originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442). For instance: ``` sample1 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, }} sample2 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}, }} sample3 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}, "d": {"id": 3, "text": "text4"}, }} ``` The `nps` field cannot be represented as a Feature while maintaining its original structure. @lhoestq suggested adding JSON as a new feature type, which would solve this problem. An alternative solution would be to change the original data format, which isn't optimal in my case. Moreover, JSON is a common structure that will likely be useful in future datasets as well.
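A minimal sketch of the "change the data format" alternative mentioned above (this particular restructuring is my assumption, not the author's preferred solution): if the variable dictionary keys are moved into an explicit `key` field, every sample shares the same schema and can be described with the existing feature types. ```python from datasets import Dataset, Features, Value # Hypothetical restructuring: the former dict keys ("a", "b", "c", ...) become a "key" field, # so "nps" is a list of structs with a fixed schema instead of a dict with unknown keys. data = { "nps": [ [{"key": "a", "id": 0, "text": "text1"}, {"key": "b", "id": 1, "text": "text2"}], [{"key": "a", "id": 0, "text": "text1"}, {"key": "b", "id": 1, "text": "text2"}, {"key": "c", "id": 2, "text": "text3"}], ] } features = Features({ "nps": [{"key": Value("string"), "id": Value("int64"), "text": Value("string")}] }) ds = Dataset.from_dict(data, features=features) print(ds[0]["nps"])  # a list of dicts with fixed fields ```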
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/4120/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4120/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/4118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4118/comments
https://api.github.com/repos/huggingface/datasets/issues/4118/events
https://github.com/huggingface/datasets/issues/4118
1,195,638,944
I_kwDODunzps5HRACg
4,118
Failing CI tests on Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[]
2022-04-07T07:36:25
2022-04-07T07:57:13
2022-04-07T07:57:13
MEMBER
null
null
null
null
## Describe the bug Our Windows CI tests have been failing since yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4118/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4118/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:20:48
https://api.github.com/repos/huggingface/datasets/issues/4117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4117/comments
https://api.github.com/repos/huggingface/datasets/issues/4117/events
https://github.com/huggingface/datasets/issues/4117
1,195,552,406
I_kwDODunzps5HQq6W
4,117
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
{ "avatar_url": "https://avatars.githubusercontent.com/u/4567991?v=4", "events_url": "https://api.github.com/users/arymbe/events{/privacy}", "followers_url": "https://api.github.com/users/arymbe/followers", "following_url": "https://api.github.com/users/arymbe/following{/other_user}", "gists_url": "https://api.github.com/users/arymbe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arymbe", "id": 4567991, "login": "arymbe", "node_id": "MDQ6VXNlcjQ1Njc5OTE=", "organizations_url": "https://api.github.com/users/arymbe/orgs", "received_events_url": "https://api.github.com/users/arymbe/received_events", "repos_url": "https://api.github.com/users/arymbe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arymbe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arymbe/subscriptions", "type": "User", "url": "https://api.github.com/users/arymbe", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @arymbe, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your problem.\r\n\r\nCould you please write the complete stack trace? That way we will be able to see which package originates the exception.", "Hello, thank you for your fast replied. this is the complete error that I got\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\nInput In [27], in <module>\r\n----> 1 from datasets import load_dataset\r\n\r\nvenv/lib/python3.8/site-packages/datasets/__init__.py:39, in <module>\r\n 37 from .arrow_dataset import Dataset, concatenate_datasets\r\n 38 from .arrow_reader import ReadInstruction\r\n---> 39 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n 40 from .combine import interleave_datasets\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n\r\nvenv/lib/python3.8/site-packages/datasets/builder.py:40, in <module>\r\n 32 from .arrow_reader import (\r\n 33 HF_GCP_BASE_URL,\r\n 34 ArrowReader,\r\n (...)\r\n 37 ReadInstruction,\r\n 38 )\r\n 39 from .arrow_writer import ArrowWriter, BeamWriter\r\n---> 40 from .data_files import DataFilesDict, sanitize_patterns\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n 42 from .features import Features\r\n\r\nvenv/lib/python3.8/site-packages/datasets/data_files.py:297, in <module>\r\n 292 except FileNotFoundError:\r\n 293 raise FileNotFoundError(f\"The directory at {base_path} doesn't contain any data file\") from None\r\n 296 def _resolve_single_pattern_in_dataset_repository(\r\n--> 297 dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n 298 pattern: str,\r\n 299 allowed_extensions: Optional[list] = None,\r\n 300 ) -> List[PurePath]:\r\n 301 data_files_ignore = FILES_TO_IGNORE\r\n 302 fs = HfFileSystem(repo_info=dataset_info)\r\n\r\nAttributeError: module 'huggingface_hub' has no attribute 'hf_api'", "This is weird... It is long ago that the package `huggingface_hub` has a submodule called `hf_api`.\r\n\r\nMaybe you have a problem with your installed `huggingface_hub`...\r\n\r\nCould you please try to update it?\r\n```shell\r\npip install -U huggingface_hub\r\n```", "Yap, I've updated several times. Then, I've tried numeral combination of datasets and huggingface_hub versions. However, I think your point is right that there is a problem with my huggingface_hub installation. I'll try another way to find the solution. I'll update it later when I get the solution. Thank you :)", "I'm sorry I can't reproduce your problem.\r\n\r\nMaybe you could try to create a new Python virtual environment and install all dependencies there from scratch. 
You can use either:\r\n- Python venv: https://docs.python.org/3/library/venv.html\r\n- or conda venv (if you are using conda): https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html", "Facing the same issue.\r\n\r\nResponse from `pip show datasets`\r\n```\r\nName: datasets\r\nVersion: 1.15.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https://github.com/huggingface/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: thomas@huggingface.co\r\nLicense: Apache 2.0\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: aiohttp, dill, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, requests, tqdm, xxhash\r\nRequired-by: lm-eval\r\n```\r\n\r\nResponse from `pip show huggingface_hub`\r\n\r\n```\r\nName: huggingface-hub\r\nVersion: 0.8.1\r\nSummary: Client library to download and publish models, datasets and other repos on the huggingface.co hub\r\nHome-page: https://github.com/huggingface/huggingface_hub\r\nAuthor: Hugging Face, Inc.\r\nAuthor-email: julien@huggingface.co\r\nLicense: Apache\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: filelock, packaging, pyyaml, requests, tqdm, typing-extensions\r\nRequired-by: datasets\r\n```\r\n\r\nresponse from `datasets-cli env`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasets-cli\", line 5, in <module>\r\n from datasets.commands.datasets_cli import main\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/__init__.py\", line 37, in <module>\r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 44, in <module>\r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/data_files.py\", line 120, in <module>\r\n dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n File \"/usr/local/lib/python3.8/dist-packages/huggingface_hub/__init__.py\", line 105, in __getattr__\r\n raise AttributeError(f\"No {package_name} attribute {name}\")\r\nAttributeError: No huggingface_hub attribute hf_api\r\n```", "A workaround: \r\nI changed lines around Line 125 in `__init__.py` of `huggingface_hub` to something like\r\n```\r\n__getattr__, __dir__, __all__ = _attach(\r\n __name__,\r\n submodules=['hf_api'],\r\n```\r\nand it works ( which gives `datasets` direct access to `huggingface_hub.hf_api` ).", "I was getting the same issue. After trying a few versions, following combination worked for me.\r\ndataset==2.3.2\r\nhuggingface_hub==0.7.0\r\n\r\nIn another environment, I just installed latest repos from pip through `pip install -U transformers datasets tokenizers evaluate`, resulting in following versions. This also worked. Hope it helps someone. 
\r\n\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.20.1", "For layoutlm_v3 finetune\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5", "(For layoutlmv3 fine-tuning) In my case, modifying `requirements.txt` as below worked.\r\n\r\n- python = 3.7\r\n\r\n```\r\ndatasets==2.3.2\r\nevaluate==0.1.2\r\nhuggingface-hub==0.8.1\r\nresponse==0.5.0\r\ntokenizers==0.10.1\r\ntransformers==4.12.5\r\nseqeval==1.2.2\r\ndeepspeed==0.5.7\r\ntensorboard==2.7.0\r\nseqeval==1.2.2\r\nsentencepiece\r\ntimm==0.4.12\r\nPillow\r\neinops\r\ntextdistance\r\nshapely\r\n```", "> For layoutlm_v3 finetune datasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5\r\n\r\nGOOD!! Thanks!", "I encountered the same issue where the problem is the absence of the 'scipy' library.\r\nTo solve this open your terminal or command prompt and run the following command to install 'scipy': pip install scipy .\r\nRestart the kernel and rerun the cell and it will work.\r\n", "> I was getting the same issue. After trying a few versions, following combination worked for me. dataset==2.3.2 huggingface_hub==0.7.0\r\n> \r\n> In another environment, I just installed latest repos from pip through `pip install -U transformers datasets tokenizers evaluate`, resulting in following versions. This also worked. Hope it helps someone.\r\n> \r\n> datasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.20.1\r\n\r\n\r\n\r\n> I was getting the same issue. After trying a few versions, following combination worked for me. dataset==2.3.2 huggingface_hub==0.7.0\r\n> \r\n> In another environment, I just installed latest repos from pip through `pip install -U transformers datasets tokenizers evaluate`, resulting in following versions. This also worked. Hope it helps someone.\r\n> \r\n> datasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.20.1\r\n\r\nI face with the same issue. After using your approach I solve the issue.Thank you very much.\r\n`pip install -U datasets`" ]
2022-04-07T05:52:36
2024-05-07T09:24:35
2022-04-19T15:36:35
NONE
null
null
null
null
## Describe the bug Could you help me, please? I get the following error: AttributeError: module 'huggingface_hub' has no attribute 'hf_api' ## Steps to reproduce the bug The error occurs when I import the `datasets` library: # Sample code to reproduce the bug from datasets import list_datasets, load_dataset, list_metrics, load_metric ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-12.3-x86_64-i386-64bit - Python version: 3.8.9 - PyArrow version: 7.0.0 - Pandas version: 1.3.5 - Huggingface-hub: 0.5.0 - Transformers: 4.18.0 Thank you in advance.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/4117/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4117/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
12 days, 9:43:59
https://api.github.com/repos/huggingface/datasets/issues/4115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4115/comments
https://api.github.com/repos/huggingface/datasets/issues/4115/events
https://github.com/huggingface/datasets/issues/4115
1,194,907,555
I_kwDODunzps5HONej
4,115
ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
{ "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cceyda", "id": 15624271, "login": "cceyda", "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "organizations_url": "https://api.github.com/users/cceyda/orgs", "received_events_url": "https://api.github.com/users/cceyda/received_events", "repos_url": "https://api.github.com/users/cceyda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "type": "User", "url": "https://api.github.com/users/cceyda", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko ", "Maybe we can add a `ignore_hidden_files` flag to the builder configs of our packaged loaders (to be consistent across all of them), wdyt @lhoestq @albertvillanova? ", "I think they should always ignore them actually ! Not sure if adding a flag would be helpful", "@lhoestq But what if the user explicitly requests those files via regex?\r\n\r\n`glob.glob` ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's `glob` doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?", "> @lhoestq But what if the user explicitly requests those files via regex?\r\n\r\nUsually hidden files are meant to be ignored. If they are data files, they must be placed outside a hidden directory in the first place right ? I think it's more sensible to explain this than adding a flag.\r\n\r\n> glob.glob ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's glob doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?\r\n\r\nAfter globbing using `fsspec`, we already ignore files that start with a `.` in `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository`, I guess we can just account for parent directories as well ?\r\n\r\nWe could open an issue on `fsspec` but I think they won't change this since it's an important breaking change for them." ]
2022-04-06T17:29:43
2022-06-01T13:04:16
2022-06-01T13:04:16
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** I sometimes like to peek at the dataset images from JupyterLab, so an '.ipynb_checkpoints' folder appears where my dataset is and (I just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss, especially if the dataset is very large. **Describe the solution you'd like** Maybe have an `ignore` option or something .gitignore-style: `dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")` **Describe alternatives you've considered** Could filter out manually, as sketched below.
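A rough sketch of the manual-filtering alternative (the helper below is mine, and I'm assuming label inference from parent folders still applies when explicit `data_files` are passed): glob the image files yourself, drop anything under a hidden directory such as `.ipynb_checkpoints`, and hand the surviving paths to `load_dataset`. ```python from pathlib import Path from datasets import load_dataset data_dir = Path("./data/original")  # hypothetical layout: data/original/<label>/<image> IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".gif"} # Keep only real image files whose path contains no hidden directory component. files = [ str(p) for p in data_dir.rglob("*") if p.suffix.lower() in IMAGE_EXTS and not any(part.startswith(".") for part in p.parts) ] dataset = load_dataset("imagefolder", data_files=files) ```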
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4115/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4115/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
55 days, 19:34:33
https://api.github.com/repos/huggingface/datasets/issues/4114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4114/comments
https://api.github.com/repos/huggingface/datasets/issues/4114/events
https://github.com/huggingface/datasets/issues/4114
1,194,855,345
I_kwDODunzps5HOAux
4,114
Allow downloading just some columns of a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "In the general case you can’t always reduce the quantity of data to download, since you can’t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess", "Actually for csv pandas has `usecols` which allows loading a subset of columns in a more efficient way afaik, but yes, you're right this might be more complex than I thought.", "Bumping the visibility of this :) Is there a recommended way of doing this?", "Passing `columns=[...]` to `load_dataset()` in streaming mode does work if the dataset is in Parquet format, but for other formats it's either not possible or not implemented", "I tried using the `columns=['bambara']` on this dataset `oza75/bambara-tts` which is in parquet, but it does not work. This feature is really useful because sometimes you don't want to download the whole dataset but just a few columns.", "It doesn't work for the dataset with `parquet` format. Are we missing something?", "It only works for `streaming=True`. When not streaming it does download the full files locally before reading the data", "Hi @lhoestq, I have an audio dataset of 250GB on the huggingface hub in parquet format. I only wanted to load the text column. It is taking a lot of time. It seems like it is downloading audio as well even in streaming mode. ", "bump on this", "Something like this worked for me:\n```py\nds = load_dataset(\n \"parler-tts/libritts_r_filtered\",\n \"clean\",\n streaming=True,\n columns=['text_normalized']\n)\n```", "I want to get all the captions from this dataset https://huggingface.co/datasets/gmongaras/Imagenet21K_Recaption\nand annoyingly you need 2x the disk space listed (3.8tb downloaded, and then it needs another 3.8 tb when it does the \"generating train split\") to download and load the dataset using load_dataset. So that made it unfeasible for me, I even wasted a few hours trying to do it on a rented instance with 5tb of disk space (which i thought would be more than enough, but isnt because of the 2x disk space thing)\n\nDoing it in streaming mode is way too slow, going through all the rows will take ages. Please make a simpler solution to just get a few columns", "You stream the dataset and speed it up with multiple workers, can you try this ?\n\n```python\nfrom datasets import load_dataset\nfrom torch.utils.data import DataLoader\n\nds = load_dataset(\"gmongaras/Imagenet21K_Recaption\", split=\"train\", streaming=True, columns=[\"recaption\"])\ndl = DataLoader(ds, num_workers=8)\n\nfor example in dl:\n ...\n```\n\nYou can also be interested in checkpointing/resuming the streaming (e.g. in case of connection error), check this [guide](https://huggingface.co/docs/datasets/v3.3.0/en/use_with_pytorch#checkpoint-and-resume) for more details\n\nAlternatively feel free to try with other tools that can load Parquet and select columns, like DuckDB / Polars / Spark (see the list of integrated libraries [here](https://huggingface.co/docs/hub/datasets-libraries)).", "> You stream the dataset and speed it up with multiple workers, can you try this ?\n> \n> from datasets import load_dataset\n> from torch.utils.data import DataLoader\n> \n> ds = load_dataset(\"gmongaras/Imagenet21K_Recaption\", split=\"train\", streaming=True, columns=[\"recaption\"])\n> dl = DataLoader(ds, num_workers=8)\n> \n> for example in dl:\n> ...\n> You can also be interested in checkpointing/resuming the streaming (e.g. 
in case of connection error), check this [guide](https://huggingface.co/docs/datasets/v3.3.0/en/use_with_pytorch#checkpoint-and-resume) for more details\n> \n> Alternatively feel free to try with other tools that can load Parquet and select columns, like DuckDB / Polars / Spark (see the list of integrated libraries [here](https://huggingface.co/docs/hub/datasets-libraries)).\n\nthe dataloader is faster but not by much. I get 15 its/s. Will take like 270 hours at this pace.", "Can you check your connection ? I'm at 1,000+ it/s on colab and also on my (consumer) internet connection.\n\nYou can also try to update `datasets`, `pyarrow`, `fsspec` and `huggingface_hub`" ]
2022-04-06T16:38:46
2025-02-17T15:10:56
null
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always make sense for this kind of use case. **Describe the solution you'd like** Be able to download just some columns of a dataset, such as doing ```python load_dataset("huggan/wikiart", columns=["artist", "genre"]) ``` Although this might make things a bit complicated in terms of local caching of datasets.
null
{ "+1": 17, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 17, "url": "https://api.github.com/repos/huggingface/datasets/issues/4114/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4114/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
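As a quick illustration of the `usecols` idea mentioned in the comments above, here is a minimal sketch of reading only two columns from a CSV with pandas; the file name `train.csv` is a placeholder, not part of the original thread:

```python
import pandas as pd

# Read only the columns of interest; pandas skips parsing the rest,
# which saves memory (the file is still read from disk, though).
df = pd.read_csv("train.csv", usecols=["artist", "genre"])
print(df.head())
```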
https://api.github.com/repos/huggingface/datasets/issues/4113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4113/comments
https://api.github.com/repos/huggingface/datasets/issues/4113/events
https://github.com/huggingface/datasets/issues/4113
1,194,843,532
I_kwDODunzps5HN92M
4,113
Multiprocessing with FileLock fails in python 3.9
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Closing this one because it must be used this way actually:\r\n```python\r\ndef main():\r\n with FileLock(\"tmp.lock\"):\r\n with Pool(2) as pool:\r\n pool.map(run, range(2))\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```" ]
2022-04-06T16:27:09
2022-11-28T11:49:14
2022-11-28T11:49:14
MEMBER
null
null
null
null
On python 3.9, this code hangs: ```python from multiprocessing import Pool from filelock import FileLock def run(i): print(f"got the lock in multi process [{i}]") with FileLock("tmp.lock"): with Pool(2) as pool: pool.map(run, range(2)) ``` This is because the subprocesses try to acquire the lock from the main process for some reason. This is not the case in older versions of python. This can cause many issues in python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well. Let's see if we can fix this and have a CI that runs on 3.9. cc @mariosasko @julien-c
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4113/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4113/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
235 days, 19:22:05
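For reference, the working pattern from the closing comment can be written as a small self-contained script; this is a sketch and assumes the `filelock` package is installed:

```python
from multiprocessing import Pool
from filelock import FileLock


def run(i):
    # Each worker does its own work; the lock is held only by the parent.
    print(f"running task {i}")


def main():
    # Acquire the lock in the parent process and spawn the pool inside it,
    # so child processes never try to re-acquire the parent's lock.
    with FileLock("tmp.lock"):
        with Pool(2) as pool:
            pool.map(run, range(2))


if __name__ == "__main__":
    main()
```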
https://api.github.com/repos/huggingface/datasets/issues/4112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4112/comments
https://api.github.com/repos/huggingface/datasets/issues/4112/events
https://github.com/huggingface/datasets/issues/4112
1,194,752,765
I_kwDODunzps5HNnr9
4,112
ImageFolder with Grayscale images dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4", "events_url": "https://api.github.com/users/chainyo/events{/privacy}", "followers_url": "https://api.github.com/users/chainyo/followers", "following_url": "https://api.github.com/users/chainyo/following{/other_user}", "gists_url": "https://api.github.com/users/chainyo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chainyo", "id": 50595514, "login": "chainyo", "node_id": "MDQ6VXNlcjUwNTk1NTE0", "organizations_url": "https://api.github.com/users/chainyo/orgs", "received_events_url": "https://api.github.com/users/chainyo/received_events", "repos_url": "https://api.github.com/users/chainyo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chainyo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chainyo/subscriptions", "type": "User", "url": "https://api.github.com/users/chainyo", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Hi! Replacing:\r\n```python\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n```\r\n\r\nwith:\r\n```python\r\ndef transform_func(examples):\r\n examples[\"image\"] = [transforms(img).to(\"cuda\") for img in examples[\"image\"]]\r\n return examples\r\n\r\ntransformed_dataset = dataset.with_transform(transform_func)\r\n```\r\nshould fix the issue. `datasets` doesn't support chaining of transforms (you can think of `set_format`/`with_format` as a predefined transform func for `set_transform`/`with_transforms`), so the last transform (in your case, `set_format`) takes precedence over the previous ones (in your case `with_format`). And the PyTorch formatter is not supported by the Image feature, hence the error (adding support for that is on our short-term roadmap).", "Ok thanks a lot for the code snippet!\r\n\r\nI love the way `datasets` is easy to use but it made it really long to pre-process all the images (400.000 in my case) before training anything. `ImageFolder` from pytorch is faster in my case but force me to have the images on my local machine.\r\n\r\nI don't know how to speed up the process without switching to `ImageFolder` :smile: ", "You can pass `ignore_verifications=True` in `load_dataset` to skip checksum verification, which takes a lot of time if the number of files is large. We will consider making this the default behavior." ]
2022-04-06T15:10:00
2022-04-22T10:21:53
2022-04-22T10:21:52
NONE
null
null
null
null
Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP) I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback: ```bash AttributeError: Caught AttributeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1765, in __getitem__ return self._getitem( File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1750, in _getitem formatted_output = format_table( File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 532, in format_table return formatter(pa_table, query_type=query_type) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 281, in __call__ return self.format_row(pa_table) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 58, in format_row return self.recursive_tensorize(row) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 54, in recursive_tensorize return map_nested(self._recursive_tensorize, data_struct, map_list=False) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 314, in map_nested mapped = [ File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 315, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in _single_map_nested return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in <dictcomp> return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 251, in _single_map_nested return function(data_struct) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 51, in _recursive_tensorize return self._tensorize(data_struct) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 38, in _tensorize if np.issubdtype(value.dtype, np.integer): AttributeError: 'bytes' object has no attribute 'dtype' ``` I don't really understand why the image is still a bytes object while I used transformations on it. 
Here is the code I used to upload the dataset (and it worked well): ```python train_dataset = load_dataset("imagefolder", data_dir="data/train") train_dataset = train_dataset["train"] test_dataset = load_dataset("imagefolder", data_dir="data/test") test_dataset = test_dataset["train"] val_dataset = load_dataset("imagefolder", data_dir="data/val") val_dataset = val_dataset["train"] dataset = DatasetDict({ "train": train_dataset, "val": val_dataset, "test": test_dataset }) dataset.push_to_hub("ChainYo/rvl-cdip") ``` Now here is the code I am using to get the dataset and prepare it for training: ```python img_size = 512 batch_size = 128 normalize = [(0.5), (0.5)] data_dir = "ChainYo/rvl-cdip" dataset = load_dataset(data_dir, split="train") transforms = transforms.Compose([ transforms.Resize(img_size), transforms.CenterCrop(img_size), transforms.ToTensor(), transforms.Normalize(*normalize) ]) transformed_dataset = dataset.with_transform(transforms) transformed_dataset.set_format(type="torch", device="cuda") train_dataloader = torch.utils.data.DataLoader( transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True ) ``` But this gets me the error above. I don't understand why it's doing this kind of weird thing. Do I need to map something on the dataset? Something like this: ```python labels = dataset.features["label"].names num_labels = dataset.features["label"].num_classes def preprocess_data(examples): images = [ex.convert("RGB") for ex in examples["image"]] labels = [ex for ex in examples["label"]] return {"images": images, "labels": labels} features = Features({ "images": Image(decode=True, id=None), "labels": ClassLabel(num_classes=num_labels, names=labels) }) decoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4", "events_url": "https://api.github.com/users/chainyo/events{/privacy}", "followers_url": "https://api.github.com/users/chainyo/followers", "following_url": "https://api.github.com/users/chainyo/following{/other_user}", "gists_url": "https://api.github.com/users/chainyo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chainyo", "id": 50595514, "login": "chainyo", "node_id": "MDQ6VXNlcjUwNTk1NTE0", "organizations_url": "https://api.github.com/users/chainyo/orgs", "received_events_url": "https://api.github.com/users/chainyo/received_events", "repos_url": "https://api.github.com/users/chainyo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chainyo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chainyo/subscriptions", "type": "User", "url": "https://api.github.com/users/chainyo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4112/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4112/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
15 days, 19:11:52
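The fix suggested in the comments (a single transform function instead of chaining `with_transform` and `set_format`) can be sketched as follows; the exact transform pipeline and image size are assumptions for illustration:

```python
from datasets import load_dataset
from torchvision import transforms as T

dataset = load_dataset("ChainYo/rvl-cdip", split="train")

pipeline = T.Compose([
    T.Resize(512),
    T.CenterCrop(512),
    T.ToTensor(),
    T.Normalize((0.5,), (0.5,)),  # single channel, since the images are grayscale
])


def transform_func(examples):
    # The Image feature decodes to PIL images; apply the torchvision
    # pipeline here so the batch already contains tensors.
    examples["image"] = [pipeline(img) for img in examples["image"]]
    return examples


# One transform replaces both with_transform(...) and set_format(...);
# moving tensors to the GPU can then happen in the training loop.
transformed_dataset = dataset.with_transform(transform_func)
```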
https://api.github.com/repos/huggingface/datasets/issues/4107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4107/comments
https://api.github.com/repos/huggingface/datasets/issues/4107/events
https://github.com/huggingface/datasets/issues/4107
1,194,484,885
I_kwDODunzps5HMmSV
4,107
Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
{ "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Pavithree", "id": 23344465, "login": "Pavithree", "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "repos_url": "https://api.github.com/users/Pavithree/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "type": "User", "url": "https://api.github.com/users/Pavithree", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Thanks for reporting. I'm looking at it", " It's not related to the dataset viewer in itself. I can replicate the error with:\r\n\r\n```\r\n>>> import datasets as ds\r\n>>> d = ds.load_dataset('Pavithree/explainLikeImFive')\r\nUsing custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51\r\nDownloading and preparing dataset json/Pavithree--explainLikeImFive to /home/slesage/.cache/huggingface/datasets/json/Pavithree--explainLikeImFive-b68b6d8112cd8a51/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 305M/305M [00:03<00:00, 98.6MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.9M/17.9M [00:00<00:00, 75.7MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [00:00<00:00, 70.6MB/s]\r\nDownloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.92s/it]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1948.42it/s]\r\nFailed to read file '/home/slesage/.cache/huggingface/datasets/downloads/5fee9c8819754df277aee6f252e4db6897d785231c21938407b8862ca871d246' with error <class 'pyarrow.lib.ArrowInvalid'>: Exceeded maximum rows\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 144, in _generate_tables\r\n dataset = json.load(f)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 293, in load\r\n return loads(fp.read(),\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/decoder.py\", line 340, in decode\r\n raise JSONDecodeError(\"Extra data\", s, end)\r\njson.decoder.JSONDecodeError: Extra data: line 1 column 916 (char 915)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1691, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1151, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File 
\"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/tqdm/std.py\", line 1168, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 146, in _generate_tables\r\n raise e\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 122, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```\r\n\r\ncc @lhoestq @albertvillanova @mariosasko ", "It seems that train.json is not a valid JSON Lines file: it has several JSON objects in the first line (the 915th character in the first line starts a new object, and there's no \"\\n\")\r\n\r\nYou need to have one JSON object per line", "I'm closing this issue.\r\n\r\n@Pavithree, please, feel free to re-open it if fixing the JSON file does not solve it.", "Thank you! that fixes the issue." ]
2022-04-06T11:37:15
2022-04-08T07:13:07
2022-04-06T14:39:55
NONE
null
null
null
null
## Dataset viewer issue - ArrowInvalid: Exceeded maximum rows **Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive* *This is a subset of the original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belong to one particular subreddit thread. However, the dataset preview for the train split returns the below-mentioned error: Status code: 400 Exception: ArrowInvalid Message: Exceeded maximum rows When I try to load the same dataset it returns an ArrowInvalid: Exceeded maximum rows error* Am I the one who added this dataset? Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4107/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4107/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3:02:40
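Since the root cause was several JSON objects packed onto a single line, a sketch of rewriting such a file as valid JSON Lines (one object per line) might look like this; `train.json` and `train_fixed.jsonl` are placeholder names:

```python
import json

decoder = json.JSONDecoder()

with open("train.json", encoding="utf-8") as src, \
        open("train_fixed.jsonl", "w", encoding="utf-8") as dst:
    text = src.read()
    pos = 0
    # Decode one object at a time, regardless of how the objects were
    # originally separated, and write exactly one object per output line.
    while pos < len(text):
        obj, end = decoder.raw_decode(text, pos)
        dst.write(json.dumps(obj, ensure_ascii=False) + "\n")
        pos = end
        # Skip any whitespace between consecutive objects.
        while pos < len(text) and text[pos].isspace():
            pos += 1
```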
https://api.github.com/repos/huggingface/datasets/issues/4105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4105/comments
https://api.github.com/repos/huggingface/datasets/issues/4105/events
https://github.com/huggingface/datasets/issues/4105
1,194,297,119
I_kwDODunzps5HL4cf
4,105
push to hub fails with huggingface-hub 0.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4", "events_url": "https://api.github.com/users/frascuchon/events{/privacy}", "followers_url": "https://api.github.com/users/frascuchon/followers", "following_url": "https://api.github.com/users/frascuchon/following{/other_user}", "gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/frascuchon", "id": 2518789, "login": "frascuchon", "node_id": "MDQ6VXNlcjI1MTg3ODk=", "organizations_url": "https://api.github.com/users/frascuchon/orgs", "received_events_url": "https://api.github.com/users/frascuchon/received_events", "repos_url": "https://api.github.com/users/frascuchon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions", "type": "User", "url": "https://api.github.com/users/frascuchon", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:\r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nI think we should fix that in `huggingface_hub`, will keep you posted. In the meantime please use `huggingface_hub` 0.4.0", "I'll be sending a fix for this later today on the `huggingface_hub` side.\r\n\r\nThe error would be converted to a `FutureWarning` if `datasets` uses kwargs instead of positional, for example here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nto be:\r\n\r\n``` python\r\n api.create_repo(\r\n name=dataset_name,\r\n token=token,\r\n repo_type=\"dataset\",\r\n organization=organization,\r\n private=private,\r\n )\r\n```\r\n\r\nBut `name` and `organization` are deprecated in `huggingface_hub=0.5`, and people should pass `repo_id='org/name` instead. Note that `repo_id` was introduced in 0.5 and if `datasets` wants to support older `huggingface_hub` versions (which I encourage it to do), there needs to be a helper function to do that. It can be something like:\r\n\r\n\r\n```python\r\ndef create_repo(\r\n client,\r\n name: str,\r\n token: Optional[str] = None,\r\n organization: Optional[str] = None,\r\n private: Optional[bool] = None,\r\n repo_type: Optional[str] = None,\r\n exist_ok: Optional[bool] = False,\r\n space_sdk: Optional[str] = None,\r\n) -> str:\r\n try:\r\n return client.create_repo(\r\n repo_id=f\"{organization}/{name}\",\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n except TypeError:\r\n return client.create_repo(\r\n name=name,\r\n organization=organization,\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n```\r\n\r\nin a `utils/_fixes.py` kinda file and and be used internally.\r\n\r\nI'll be sending a patch to `huggingface_hub` to convert the error reported in this issue to a `FutureWarning`.", "PR with the hotfix on the `huggingface_hub` side: https://github.com/huggingface/huggingface_hub/pull/822", "We can definitely change `push_to_hub` to use `repo_id` in `datasets` and require `huggingface_hub>=0.5.0`.\r\n\r\nLet me open a PR :)", "`huggingface_hub` 0.5.1 just got released with a fix, feel free to update `huggingface_hub` ;)" ]
2022-04-06T08:59:57
2022-04-13T14:30:47
2022-04-13T14:30:47
NONE
null
null
null
null
## Describe the bug `ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id" ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("rubrix/news_test") ds.push_to_hub("<your-user>/news_test", token="<your-token>") ``` ## Expected results The dataset is successfully uploaded ## Actual results A validation error is raised: ```bash if repo_id and (name or organization): > raise ValueError( "Only pass `repo_id` and leave deprecated `name` and " "`organization` to be None." E ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.1 - `huggingface-hub`: 0.5 - Platform: macOS - Python version: 3.8.12 - PyArrow version: 6.0.0 cc @adrinjalali
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4105/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4105/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
7 days, 5:30:50
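With `huggingface_hub` 0.5.1 or later, the repo can be created by passing a single `repo_id` instead of the deprecated `name`/`organization` pair; this is a sketch of what the updated call looks like (the repo name is a placeholder):

```python
from huggingface_hub import HfApi

api = HfApi()
# repo_id combines the org (or user) and the dataset name in one string,
# replacing the separate name/organization arguments.
api.create_repo(
    repo_id="my-org/news_test",
    repo_type="dataset",
    private=False,
    exist_ok=True,
)
```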
https://api.github.com/repos/huggingface/datasets/issues/4104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4104/comments
https://api.github.com/repos/huggingface/datasets/issues/4104/events
https://github.com/huggingface/datasets/issues/4104
1,194,072,966
I_kwDODunzps5HLBuG
4,104
Add time series data - stock market
{ "avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4", "events_url": "https://api.github.com/users/rozeappletree/events{/privacy}", "followers_url": "https://api.github.com/users/rozeappletree/followers", "following_url": "https://api.github.com/users/rozeappletree/following{/other_user}", "gists_url": "https://api.github.com/users/rozeappletree/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rozeappletree", "id": 45640029, "login": "rozeappletree", "node_id": "MDQ6VXNlcjQ1NjQwMDI5", "organizations_url": "https://api.github.com/users/rozeappletree/orgs", "received_events_url": "https://api.github.com/users/rozeappletree/received_events", "repos_url": "https://api.github.com/users/rozeappletree/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rozeappletree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rozeappletree/subscriptions", "type": "User", "url": "https://api.github.com/users/rozeappletree", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
[ "Can I use instructions present in below link for time series dataset as well? \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md ", "cc'ing @kashif and @NielsRogge for visibility!", "@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly point me to the dataset? Also, note we have a bunch of time series datasets checked in e.g. `electricity_load_diagrams` or `monash_tsf`, and ideally this dataset could also be in a similar format. ", "Thankyou. This is how raw data looks like before cleaning for an individual stocks:\r\n\r\n1. https://github.com/INF800/marktech/tree/raw-data/f/data/raw\r\n2. https://github.com/INF800/marktech/tree/raw-data/t/data/raw\r\n3. https://github.com/INF800/marktech/tree/raw-data/rdfn/data/raw\r\n4. https://github.com/INF800/marktech/tree/raw-data/irbt/data/raw\r\n5. https://github.com/INF800/marktech/tree/raw-data/hll/data/raw\r\n6. https://github.com/INF800/marktech/tree/raw-data/infy/data/raw\r\n7. https://github.com/INF800/marktech/tree/raw-data/reli/data/raw\r\n8. https://github.com/INF800/marktech/tree/raw-data/hdbk/data/raw\r\n\r\n> Scraping is automated using GitHub Actions. So, everyday we will see a new file added in the above links.\r\n\r\nI can rewrite the cleaning scripts to make sure it fits HF dataset standards. (P.S I am very much new to HF dataset)\r\n\r\nThe data set above can be converted into univariate regression / multivariate regression / sequence to sequence generation dataset etc. So, do we have some kind of transformation modules that will read the dataset as some type of dataset (`GenericTimeData`) and convert it to other possible dataset relating to a specific ML task. **By having this kind of transformation module, I only have to add data once** and use transformation module whenever necessary\r\n\r\nAdditionally, having some kind of versioning for the dataset will be really helpful because it will keep on updating - especially time series datasets ", "thanks @INF800 I'll have a look. I believe it should be possible to incorporate this into the time-series format.", "Referencing https://github.com/qingsongedu/time-series-transformers-review", "@INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare/contrast and build upon the research etc. but I am not ready to share them publicly just yet.\r\n\r\nIn any case regarding your dataset at the moment its seems from looking at the csv files, its mixture of textual and numerical data, sometimes in the same column etc. As you know, for time series models we would need just numeric data so I would need your help in disambiguating the dataset you have collected and also perhaps starting with just numerical data to start with... \r\n\r\nDo you think you can make a version with just numerical data?", "> @INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare/contrast and build upon the research etc. but I am not ready to share them publicly just yet.\r\n> \r\n> In any case regarding your dataset at the moment its seems from looking at the csv files, its mixture of textual and numerical data, sometimes in the same column etc. 
As you know, for time series models we would need just numeric data so I would need your help in disambiguating the dataset you have collected and also perhaps starting with just numerical data to start with...\r\n> \r\n> Do you think you can make a version with just numerical data?\r\n\r\nWill share the numeric data and conversion script within end of this week. \r\n\r\nI am on a business trip currently - it is in my desktop.", "thanks @INF800 kashif.rasul@gmail.com should work", "It should be in your inbox!\r\n\r\nOn Sun, 21 Jul, 2024, 9:44 pm Kashif Rasul, ***@***.***>\r\nwrote:\r\n\r\n> thanks @INF800 <https://github.com/INF800> ***@***.*** should\r\n> work\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/4104#issuecomment-2241701256>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AK4GSXLHCOGNTU5ERJ6M3ITZNPM6TAVCNFSM6AAAAABLG65FLKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDENBRG4YDCMRVGY>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
2022-04-06T05:46:58
2024-07-21T16:54:30
null
NONE
null
null
null
null
## Adding a Time Series Dataset - **Name:** 2min ticker data for stock market - **Description:** 8 stocks' data collected for 1 month following the start of the Ukraine-Russia war. 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below - **Data:** Collected by myself from investing.com - **Motivation:** Test applicability of transformer-based models on stock market / time series problems ![image](https://user-images.githubusercontent.com/45640029/161904077-52fe97cb-3720-4e3f-98ee-7f6720a056e2.png)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4104/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4104/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
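Before such a dataset can be used with the existing time-series configs, the comments note it needs to be purely numeric; a hedged pandas sketch of stripping non-numeric columns from a raw ticker CSV could look like this (the file and column names are assumptions, not the actual scraped schema):

```python
import pandas as pd

df = pd.read_csv("raw_ticker.csv")

# Parse the timestamp separately, then keep only columns that can be
# coerced to numbers (fully non-numeric columns become all-NaN and are dropped).
timestamps = pd.to_datetime(df["Date"], errors="coerce")
numeric = df.apply(pd.to_numeric, errors="coerce").dropna(axis=1, how="all")
numeric.insert(0, "timestamp", timestamps)
numeric.to_csv("clean_ticker.csv", index=False)
```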
https://api.github.com/repos/huggingface/datasets/issues/4101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4101/comments
https://api.github.com/repos/huggingface/datasets/issues/4101/events
https://github.com/huggingface/datasets/issues/4101
1,193,399,204
I_kwDODunzps5HIdOk
4,101
How can I download only the train and test split for full numbers using load_dataset()?
{ "avatar_url": "https://avatars.githubusercontent.com/u/64383902?v=4", "events_url": "https://api.github.com/users/Nakkhatra/events{/privacy}", "followers_url": "https://api.github.com/users/Nakkhatra/followers", "following_url": "https://api.github.com/users/Nakkhatra/following{/other_user}", "gists_url": "https://api.github.com/users/Nakkhatra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Nakkhatra", "id": 64383902, "login": "Nakkhatra", "node_id": "MDQ6VXNlcjY0MzgzOTAy", "organizations_url": "https://api.github.com/users/Nakkhatra/orgs", "received_events_url": "https://api.github.com/users/Nakkhatra/received_events", "repos_url": "https://api.github.com/users/Nakkhatra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Nakkhatra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nakkhatra/subscriptions", "type": "User", "url": "https://api.github.com/users/Nakkhatra", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Hi! Can you please specify the full name of the dataset? IIRC `full_numbers` is one of the configs of the `svhn` dataset, and its generation is slow due to data being stored in binary Matlab files. Even if you specify a specific split, `datasets` downloads all of them, but we plan to fix that soon and only download the requested split.\r\n\r\nIf you are in a hurry, download the `svhn` script [here](`https://huggingface.co/datasets/svhn/blob/main/svhn.py`), remove [this code](https://huggingface.co/datasets/svhn/blob/main/svhn.py#L155-L162), and run:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/your/local/script.py\", \"full_numbers\")\r\n```\r\n\r\nAnd to make loading easier in Colab, you can create a dataset repo on the Hub and upload the script there. Or push the script to Google Drive and mount the drive in Colab." ]
2022-04-05T16:00:15
2022-04-06T13:09:01
null
NONE
null
null
null
null
How can I download only the train and test splits for full numbers using load_dataset()? I do not need the extra split, and it will take 40 mins just to download in Colab. I am very short on time. Please help.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4101/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4101/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
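Following the workaround described in the comment (download the `svhn` script, delete the lines that generate the extra split, and load it locally), loading then looks roughly like this; the local path is a placeholder:

```python
from datasets import load_dataset

# After editing the local copy of svhn.py so that only the train/test
# splits of the "full_numbers" config are generated:
dset = load_dataset("path/to/your/local/svhn.py", "full_numbers")
print(dset)
```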
https://api.github.com/repos/huggingface/datasets/issues/4099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4099/comments
https://api.github.com/repos/huggingface/datasets/issues/4099/events
https://github.com/huggingface/datasets/issues/4099
1,193,253,768
I_kwDODunzps5HH5uI
4,099
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
{ "avatar_url": "https://avatars.githubusercontent.com/u/20210017?v=4", "events_url": "https://api.github.com/users/andreybond/events{/privacy}", "followers_url": "https://api.github.com/users/andreybond/followers", "following_url": "https://api.github.com/users/andreybond/following{/other_user}", "gists_url": "https://api.github.com/users/andreybond/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andreybond", "id": 20210017, "login": "andreybond", "node_id": "MDQ6VXNlcjIwMjEwMDE3", "organizations_url": "https://api.github.com/users/andreybond/orgs", "received_events_url": "https://api.github.com/users/andreybond/received_events", "repos_url": "https://api.github.com/users/andreybond/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andreybond/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreybond/subscriptions", "type": "User", "url": "https://api.github.com/users/andreybond", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 194\r\n })\r\n validation: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 71\r\n })\r\n})\r\n```\r\n\r\nThe only reason I can imagine this issue may arise is if your default encoding is not \"UTF-8\" (and it is ASCII instead). This is usually the case on Windows machines; but you say your environment is a Linux machine. Maybe you change your machine default encoding?\r\n\r\nCould you please check this?\r\n```python\r\nIn [6]: import sys\r\n\r\nIn [7]: sys.getdefaultencoding()\r\nOut[7]: 'utf-8'\r\n```", "I opened a PR in the original dataset loading script:\r\n- microsoft/unilm#677\r\n\r\nand fixed the corresponding dataset script on the Hub:\r\n- https://huggingface.co/datasets/nielsr/XFUN/commit/73ba5e026621e05fb756ae0f267eb49971f70ebd", "import sys\r\nsys.getdefaultencoding()\r\n\r\nreturned: 'utf-8'\r\n\r\n---------------------\r\n\r\nI've just cloned master branch - your fix works! Thank you!" ]
2022-04-05T14:42:38
2022-04-06T06:37:44
2022-04-06T06:35:54
NONE
null
null
null
null
## Describe the bug Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("nielsr/XFUN", "xfun.ja") ``` ## Expected results Dataset should be downloaded without exceptions ## Actual results Stack trace (for the second-time execution): Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477... Downloading data files: 100% 2/2 [00:00<00:00, 88.48it/s] Extracting data files: 100% 2/2 [00:00<00:00, 79.60it/s] UnicodeDecodeErrorTraceback (most recent call last) <ipython-input-31-79c26bd1109c> in <module> 1 from datasets import load_dataset 2 ----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja") /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 604 ) 605 --> 606 # By default, return all splits 607 if split is None: 608 split = {s: s for s in self.info.splits} /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 692 Args: 693 split: `datasets.Split` which subset of the data to read. --> 694 695 Returns: 696 `Dataset` /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys) /usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self) 252 if not self.disable: 253 self.display(check_delay=False) --> 254 255 def __iter__(self): 256 try: /usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self) 1183 for obj in iterable: 1184 yield obj -> 1185 return 1186 1187 mininterval = self.mininterval ~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths) 140 logger.info("Generating examples from = %s", filepath) 141 with open(filepath[0], "r") as f: --> 142 data = json.load(f) 143 144 for doc in data["documents"]: /usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 294 295 """ --> 296 return loads(fp.read(), 297 cls=cls, object_hook=object_hook, 298 parse_float=parse_float, parse_int=parse_int, /usr/lib/python3.6/encodings/ascii.py in decode(self, input, final) 24 class IncrementalDecoder(codecs.IncrementalDecoder): 25 def decode(self, input, final=False): ---> 26 return codecs.ascii_decode(input, self.errors)[0] 27 28 class StreamWriter(Codec,codecs.StreamWriter): UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: 2.0.0 (but reproduced with many previous versions) - Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu - Python version: 3.6.9 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4099/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
15:53:16
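The underlying fix (also applied to the loading script on the Hub) is simply to open the JSON files with an explicit UTF-8 encoding rather than relying on the machine's default codec; a minimal sketch, with a placeholder file name:

```python
import json

# encoding="utf-8" makes the read independent of the locale's default
# codec (ASCII on the reporter's machine), avoiding the UnicodeDecodeError.
with open("xfun_annotations.json", "r", encoding="utf-8") as f:
    data = json.load(f)
```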
https://api.github.com/repos/huggingface/datasets/issues/4096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4096/comments
https://api.github.com/repos/huggingface/datasets/issues/4096/events
https://github.com/huggingface/datasets/issues/4096
1,193,165,229
I_kwDODunzps5HHkGt
4,096
Add support for streaming Zarr stores for hosted datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4", "events_url": "https://api.github.com/users/jacobbieker/events{/privacy}", "followers_url": "https://api.github.com/users/jacobbieker/followers", "following_url": "https://api.github.com/users/jacobbieker/following{/other_user}", "gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jacobbieker", "id": 7170359, "login": "jacobbieker", "node_id": "MDQ6VXNlcjcxNzAzNTk=", "organizations_url": "https://api.github.com/users/jacobbieker/orgs", "received_events_url": "https://api.github.com/users/jacobbieker/received_events", "repos_url": "https://api.github.com/users/jacobbieker/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions", "type": "User", "url": "https://api.github.com/users/jacobbieker", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @jacobbieker, thanks for your request and study of possible alternatives.\r\n\r\nWe are very interested in finding a way to make `datasets` useful to you.\r\n\r\nLooking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https://zarr.readthedocs.io/en/stable/api/storage.html#zarr.storage.ZipStore\r\n\r\nThis might be convenient for many reasons:\r\n- On the one hand, we avoid the Git issue with huge number of small files: chunks files are compressed into a single ZIP file\r\n- On the other hand, the ZIP file format is specially suited for streaming data because it allows random access to its component files (i.e. it supports random access to its chunks)\r\n\r\nAnyway, I think that a Python loading script will be necessary: you need to implement additional logic to select certain chunks (based on date or other criteria).\r\n\r\nPlease, let me know if this makes sense to you.", "Ah okay, I missed the option of zip files for zarr, I'll try that with our repos and see if it works! Thanks a lot!", "Hi @jacobbieker, does the Zarr ZipStore work for your use case?", "Hi,\r\n\r\nYes, it seems to! I got it working for https://huggingface.co/datasets/openclimatefix/mrms thanks for the help! ", "On behalf of the Zarr developers, let me say THANK YOU for working to support Zarr on HF! 🙏 Zarr is a 100% open-source and community driven project (fiscally sponsored by NumFocus). We see it as an ideal format for ML training datasets, particularly in scientific domains.\r\n\r\nI think the solution of zipping the Zarr store is a reasonable way to balance the constraints of Git LFS with the structure of Zarr.\r\n\r\nIt would be amazing to get something on the [Hugging Face Datasets Docs](https://huggingface.co/docs/datasets/index) about how to best work with Zarr. 
Let me know if there's a way I could help with that effort.", "Also just noting here that I was able to lazily open @jacobbieker's dataset over the internet from HF hub 🚀 !\r\n\r\n```python\r\nimport xarray as xr\r\nurl = \"https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\"\r\nzip_url = 'zip:///::' + url\r\nds = xr.open_dataset(zip_url, engine='zarr', chunks={})\r\n```\r\n\r\n<img width=\"740\" alt=\"image\" src=\"https://user-images.githubusercontent.com/1197350/164508663-bc75cdc0-734d-44f4-9562-2877ecfdf433.png\">\r\n", "However, I wasn't able to get streaming working using the Datasets api:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"openclimatefix/mrms\", streaming=True, split='train')\r\nitem = next(iter(ds))\r\n```\r\n\r\n<details>\r\n<summary>FileNotFoundError traceback</summary>\r\n\r\n```\r\nNo config specified, defaulting to: mrms/2021\r\nzip://::https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\r\ndata/2016_001.zarr.zip\r\nzip://2016_001.zarr.zip::https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [1], in <cell line: 3>()\r\n 1 from datasets import load_dataset\r\n 2 ds = load_dataset(\"openclimatefix/mrms\", streaming=True, split='train')\r\n----> 3 item = next(iter(ds))\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:497, in IterableDataset.__iter__(self)\r\n 496 def __iter__(self):\r\n--> 497 for key, example in self._iter():\r\n 498 if self.features:\r\n 499 # we encode the example for ClassLabel feature types for example\r\n 500 encoded_example = self.features.encode_example(example)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:494, in IterableDataset._iter(self)\r\n 492 else:\r\n 493 ex_iterable = self._ex_iterable\r\n--> 494 yield from ex_iterable\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:87, in ExamplesIterable.__iter__(self)\r\n 86 def __iter__(self):\r\n---> 87 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/openclimatefix--mrms/2a6f697014d7eb3caf586ca137d47ca38785ae2fe36248611b021f8248b59936/mrms.py:150, in MRMS._generate_examples(self, filepath, split)\r\n 147 filepath = \"[https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip](https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip%3C/span%3E%3Cspan) style=\"color:rgb(175,0,0)\">\"\r\n 148 # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.\r\n 149 # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.\r\n--> 150 with zarr.storage.FSStore(fsspec.open(\"zip::\" + filepath, mode='r'), mode='r') as store:\r\n 151 data = xr.open_zarr(store)\r\n 152 for key, row in enumerate(data[\"time\"].values):\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/zarr/storage.py:1120, in FSStore.__init__(self, url, normalize_keys, key_separator, mode, exceptions, dimension_separator, **storage_options)\r\n 1117 import fsspec\r\n 1118 self.normalize_keys = normalize_keys\r\n-> 1120 protocol, _ = 
fsspec.core.split_protocol(url)\r\n 1121 # set auto_mkdir to True for local file system\r\n 1122 if protocol in (None, \"file\") and not storage_options.get(\"auto_mkdir\"):\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:514, in split_protocol(urlpath)\r\n 512 def split_protocol(urlpath):\r\n 513 \"\"\"Return protocol, path pair\"\"\"\r\n--> 514 urlpath = stringify_path(urlpath)\r\n 515 if \"://\" in urlpath:\r\n 516 protocol, path = urlpath.split(\"://\", 1)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/utils.py:315, in stringify_path(filepath)\r\n 313 return filepath\r\n 314 elif hasattr(filepath, \"__fspath__\"):\r\n--> 315 return filepath.__fspath__()\r\n 316 elif isinstance(filepath, pathlib.Path):\r\n 317 return str(filepath)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:98, in OpenFile.__fspath__(self)\r\n 96 def __fspath__(self):\r\n 97 # may raise if cannot be resolved to local file\r\n---> 98 return self.open().__fspath__()\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:140, in OpenFile.open(self)\r\n 132 def open(self):\r\n 133 \"\"\"Materialise this as a real open file without context\r\n 134 \r\n 135 The file should be explicitly closed to avoid enclosed file\r\n (...)\r\n 138 been deleted; but a with-context is better style.\r\n 139 \"\"\"\r\n--> 140 out = self.__enter__()\r\n 141 closer = out.close\r\n 142 fobjects = self.fobjects.copy()[:-1]\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/spec.py:1009, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1007 else:\r\n 1008 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1009 f = self._open(\r\n 1010 path,\r\n 1011 mode=mode,\r\n 1012 block_size=block_size,\r\n 1013 autocommit=ac,\r\n 1014 cache_options=cache_options,\r\n 1015 **kwargs,\r\n 1016 )\r\n 1017 if compression is not None:\r\n 1018 from fsspec.compression import compr\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/implementations/zip.py:96, in ZipFileSystem._open(self, path, mode, block_size, autocommit, cache_options, **kwargs)\r\n 94 if mode != \"rb\":\r\n 95 raise NotImplementedError\r\n---> 96 info = self.info(path)\r\n 97 out = self.zip.open(path, \"r\")\r\n 98 out.size = info[\"size\"]\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/archive.py:42, in AbstractArchiveFileSystem.info(self, path, **kwargs)\r\n 40 return self.dir_cache[path + \"/\"]\r\n 41 else:\r\n---> 42 raise FileNotFoundError(path)\r\n\r\nFileNotFoundError:\r\n```\r\n\r\n</details>\r\n\r\nIs this a bug? Or am I just doing it wrong...", "I'm still messing around with that dataset, so the data might have moved. I currently have each year of MRMS precipitation rate data as it's own zarr, but as they are quite large (on order of 100GB each) I'm working to split them into single days, and as such they are still being moved around, I was just trying to get a proof of concept working originally. 
", "I've mostly finished rearranging the data now and uploading some more, so this works now:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset(\"openclimatefix/mrms\", streaming=True, split=\"train\")\r\nitem = next(iter(ds))\r\nprint(item.keys())\r\nprint(item[\"timestamp\"])\r\n```\r\n\r\nThe MRMS data now goes most of 2016-2022, with quite a few gaps I'm working on filling in", "Hi @albertvillanova, I noticed there is now the [HFFileSystem](https://huggingface.co/docs/huggingface_hub/main/en/guides/hf_file_system), where the docs show an example of writing a Zarr store directly to the Hub, and no mention of having too many files. Is there still the restriction on lots of files in `datasets`? It would be more convenient to be able to have the geospatial data in one large Zarr store, rather than in multiple smaller ones, but happy to continue using zipped Zarrs if thats the recommended way.", "Hi @jacobbieker.\r\n\r\nThanks for coming back to this pending issue. \r\n\r\nIn fact, we are now using the `fsspec` API in our `HFFileSystem`, which was not the case when you created this issue.\r\nOn the other hand, I am not sure of the current limitations, both in terms of the number of files or performance when loading.\r\n- If I remember correctly, I think there is a limit in the maximum number of files per directory: 10k\r\n\r\nI think it would be best to try a POC again and discuss any issues that arise and whether we can fix them on our end (both `datasets` and the Hub).\r\nWe would really like to support the Zarr format 100% and that the Hub is really convenient for your use case. So do not hesitate to report any problem: you can ping me on the Hub as @albertvillanova" ]
2022-04-05T13:38:32
2023-12-07T09:01:49
2022-04-21T08:12:58
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large, and on the order of TBs of data or 10's of TBs of data for a single dataset, it can be difficult to store the dataset locally for users. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823) as Zarr splits the data into lots of small chunks for fast loading, and that doesn't work well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading them as a single file, which seems to be working (see https://huggingface.co/datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work though. On the other hand, in https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily. **Describe the solution you'd like** A way to upload Zarr stores for hosted datasets so that we can stream it with xarray and fsspec. **Describe alternatives you've considered** Tarring each Zarr store individually and just extracting them in the dataset script -> Downside this is a lot of data that probably doesn't fit locally for a lot of potential users. Pre-prepare examples in a format like Parquet -> Would use a lot more storage, and a lot less flexibility, in the eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4", "events_url": "https://api.github.com/users/jacobbieker/events{/privacy}", "followers_url": "https://api.github.com/users/jacobbieker/followers", "following_url": "https://api.github.com/users/jacobbieker/following{/other_user}", "gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jacobbieker", "id": 7170359, "login": "jacobbieker", "node_id": "MDQ6VXNlcjcxNzAzNTk=", "organizations_url": "https://api.github.com/users/jacobbieker/orgs", "received_events_url": "https://api.github.com/users/jacobbieker/received_events", "repos_url": "https://api.github.com/users/jacobbieker/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions", "type": "User", "url": "https://api.github.com/users/jacobbieker", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 4, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/4096/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4096/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
15 days, 18:34:26
https://api.github.com/repos/huggingface/datasets/issues/4094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4094/comments
https://api.github.com/repos/huggingface/datasets/issues/4094/events
https://github.com/huggingface/datasets/issues/4094
1,192,534,414
I_kwDODunzps5HFKGO
4,094
Helo Mayfrends
{ "avatar_url": "https://avatars.githubusercontent.com/u/102933353?v=4", "events_url": "https://api.github.com/users/Budigming/events{/privacy}", "followers_url": "https://api.github.com/users/Budigming/followers", "following_url": "https://api.github.com/users/Budigming/following{/other_user}", "gists_url": "https://api.github.com/users/Budigming/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Budigming", "id": 102933353, "login": "Budigming", "node_id": "U_kgDOBiKjaQ", "organizations_url": "https://api.github.com/users/Budigming/orgs", "received_events_url": "https://api.github.com/users/Budigming/received_events", "repos_url": "https://api.github.com/users/Budigming/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Budigming/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Budigming/subscriptions", "type": "User", "url": "https://api.github.com/users/Budigming", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
[]
2022-04-05T02:42:57
2022-04-05T07:16:42
2022-04-05T07:16:42
NONE
null
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4094/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4094/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4:33:45
https://api.github.com/repos/huggingface/datasets/issues/4093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4093/comments
https://api.github.com/repos/huggingface/datasets/issues/4093/events
https://github.com/huggingface/datasets/issues/4093
1,192,523,161
I_kwDODunzps5HFHWZ
4,093
elena-soare/crawled-ecommerce: missing dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17519354?v=4", "events_url": "https://api.github.com/users/seevaratnam/events{/privacy}", "followers_url": "https://api.github.com/users/seevaratnam/followers", "following_url": "https://api.github.com/users/seevaratnam/following{/other_user}", "gists_url": "https://api.github.com/users/seevaratnam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/seevaratnam", "id": 17519354, "login": "seevaratnam", "node_id": "MDQ6VXNlcjE3NTE5MzU0", "organizations_url": "https://api.github.com/users/seevaratnam/orgs", "received_events_url": "https://api.github.com/users/seevaratnam/received_events", "repos_url": "https://api.github.com/users/seevaratnam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/seevaratnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seevaratnam/subscriptions", "type": "User", "url": "https://api.github.com/users/seevaratnam", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" } ]
[ "It's a bug! Thanks for reporting, I'm looking at it.", "By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer.", "Fixed. See https://huggingface.co/datasets/elena-soare/crawled-ecommerce/viewer/elena-soare--crawled-ecommerce/train.\r\n\r\n<img width=\"1552\" alt=\"Capture d’écran 2022-04-12 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/162929722-2e2b80e2-154a-4b61-87bd-e341bd6c46e6.png\">\r\n\r\nThanks for reporting!" ]
2022-04-05T02:25:19
2022-04-12T09:34:53
2022-04-12T09:34:53
NONE
null
null
null
null
elena-soare/crawled-ecommerce **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4093/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4093/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
7 days, 7:09:34
https://api.github.com/repos/huggingface/datasets/issues/4091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4091/comments
https://api.github.com/repos/huggingface/datasets/issues/4091/events
https://github.com/huggingface/datasets/issues/4091
1,192,023,855
I_kwDODunzps5HDNcv
4,091
Build a Dataset One Example at a Time Without Loading All Data Into Memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/99340348?v=4", "events_url": "https://api.github.com/users/aravind-tonita/events{/privacy}", "followers_url": "https://api.github.com/users/aravind-tonita/followers", "following_url": "https://api.github.com/users/aravind-tonita/following{/other_user}", "gists_url": "https://api.github.com/users/aravind-tonita/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aravind-tonita", "id": 99340348, "login": "aravind-tonita", "node_id": "U_kgDOBevQPA", "organizations_url": "https://api.github.com/users/aravind-tonita/orgs", "received_events_url": "https://api.github.com/users/aravind-tonita/received_events", "repos_url": "https://api.github.com/users/aravind-tonita/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aravind-tonita/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aravind-tonita/subscriptions", "type": "User", "url": "https://api.github.com/users/aravind-tonita", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Hi! Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:\r\n* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)\r\n* storing the data in a JSON/CSV/Parquet/TXT file and using `Dataset.from_{format}`\r\n* using `add_item` + `save_to_disk` on smaller chunks: \r\n ```python\r\n from datasets import Dataset, concatenate_datasets\r\n MAX_SAMPLES_IN_MEMORY = 1000\r\n samples_in_dset = 0\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n path_to_save_dir = \"path/to/save/dir\"\r\n num_chunks = 0\r\n for example_dict in custom_example_dict_streamer(\"/path/to/raw/data\"):\r\n dset = dset.add_item(example_dict)\r\n samples_in_dset += 1\r\n if samples_in_dset == MAX_SAMPLES_IN_MEMORY:\r\n samples_in_dset = 0\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n if samples_in_dset > 0:\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n loaded_dsets = [] # memory-mapped\r\n for chunk_num in range(num_chunks):\r\n dset = Dataset.load_from_disk(f\"{path_to_save_dir}{chunk_num}\") \r\n loaded_dsets.append(dset)\r\n final_dset = concatenate_datasets(dset)\r\n ```\r\n If you still have issues with this approach, you can try to delete unused datasets with `gc.collect()` to free some memory. ", "This is really elegant, thank you @mariosasko! I will try this." ]
2022-04-04T16:19:24
2022-04-20T14:31:00
2022-04-20T14:31:00
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.** **Describe the solution you'd like** I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset before hand. ``` # Initialize an empty Dataset, possibly from a known schema. dataset = Dataset() # Read in examples one by one using a custom data streamer. for example_dict in custom_example_dict_streamer("/path/to/raw/data"): # Add this example to the dict but do not store it in memory. dataset.add_item(example_dict) # Save the final dataset to disk as an Arrow-backed dataset. dataset.save_to_disk("/path/to/dataset") ... # I'd like to be able to later `load_from_disk` and use the loaded Dataset # just like any other memory-mapped pyarrow-backed HuggingFace dataset... loaded_dataset = Dataset.load_from_disk("/path/to/dataset") loaded_dataset.set_format(type="torch", columnns=["foo", "bar", "baz"]) dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16) ... ``` **Describe alternatives you've considered** I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping. Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance!
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4091/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4091/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
15 days, 22:11:36
https://api.github.com/repos/huggingface/datasets/issues/4086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4086/comments
https://api.github.com/repos/huggingface/datasets/issues/4086/events
https://github.com/huggingface/datasets/issues/4086
1,191,373,374
I_kwDODunzps5HAuo-
4,086
Dataset viewer issue for McGill-NLP/feedbackQA
{ "avatar_url": "https://avatars.githubusercontent.com/u/54827718?v=4", "events_url": "https://api.github.com/users/cslizc/events{/privacy}", "followers_url": "https://api.github.com/users/cslizc/followers", "following_url": "https://api.github.com/users/cslizc/following{/other_user}", "gists_url": "https://api.github.com/users/cslizc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cslizc", "id": 54827718, "login": "cslizc", "node_id": "MDQ6VXNlcjU0ODI3NzE4", "organizations_url": "https://api.github.com/users/cslizc/orgs", "received_events_url": "https://api.github.com/users/cslizc/received_events", "repos_url": "https://api.github.com/users/cslizc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cslizc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cslizc/subscriptions", "type": "User", "url": "https://api.github.com/users/cslizc", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @cslizc, thanks for reporting.\r\n\r\nI have just forced the refresh of the corresponding cache and the preview is working now.", "thank you so much" ]
2022-04-04T07:27:20
2022-04-04T22:29:53
2022-04-04T08:01:45
NONE
null
null
null
null
## Dataset viewer issue for '*McGill-NLP/feedbackQA*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)* *short description of the issue* The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message: ``` Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist. ``` Am I the one who added this dataset ? Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4086/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4086/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:34:25
https://api.github.com/repos/huggingface/datasets/issues/4085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4085/comments
https://api.github.com/repos/huggingface/datasets/issues/4085/events
https://github.com/huggingface/datasets/issues/4085
1,190,621,345
I_kwDODunzps5G93Ch
4,085
datasets.set_progress_bar_enabled(False) not working in datasets v2
{ "avatar_url": "https://avatars.githubusercontent.com/u/3381112?v=4", "events_url": "https://api.github.com/users/virilo/events{/privacy}", "followers_url": "https://api.github.com/users/virilo/followers", "following_url": "https://api.github.com/users/virilo/following{/other_user}", "gists_url": "https://api.github.com/users/virilo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/virilo", "id": 3381112, "login": "virilo", "node_id": "MDQ6VXNlcjMzODExMTI=", "organizations_url": "https://api.github.com/users/virilo/orgs", "received_events_url": "https://api.github.com/users/virilo/received_events", "repos_url": "https://api.github.com/users/virilo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/virilo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/virilo/subscriptions", "type": "User", "url": "https://api.github.com/users/virilo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted", "Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update your code to use `datasets.logging.disable_progress_bar`.\r\n\r\nYou have more info in our docs: [Logging methods](https://huggingface.co/docs/datasets/package_reference/logging_methods)", "One important thing for beginner like me is: from datasets.utils.logging import disable_progress_bar\r\nDo not forget the 'utils' or you will waste a long time like me...." ]
2022-04-02T12:40:10
2022-09-17T02:18:03
2022-04-04T06:44:34
NONE
null
null
null
null
## Describe the bug datasets.set_progress_bar_enabled(False) not working in datasets v2 ## Steps to reproduce the bug ```python datasets.set_progress_bar_enabled(False) ``` ## Expected results datasets not using any progress bar ## Actual results AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled ## Environment info datasets version 2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4085/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 18:04:24
https://api.github.com/repos/huggingface/datasets/issues/4084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4084/comments
https://api.github.com/repos/huggingface/datasets/issues/4084/events
https://github.com/huggingface/datasets/issues/4084
1,190,060,415
I_kwDODunzps5G7uF_
4,084
Errors in `Train with Datasets` Tensorflow code section on Huggingface.co
{ "avatar_url": "https://avatars.githubusercontent.com/u/57095771?v=4", "events_url": "https://api.github.com/users/blackhat-coder/events{/privacy}", "followers_url": "https://api.github.com/users/blackhat-coder/followers", "following_url": "https://api.github.com/users/blackhat-coder/following{/other_user}", "gists_url": "https://api.github.com/users/blackhat-coder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/blackhat-coder", "id": 57095771, "login": "blackhat-coder", "node_id": "MDQ6VXNlcjU3MDk1Nzcx", "organizations_url": "https://api.github.com/users/blackhat-coder/orgs", "received_events_url": "https://api.github.com/users/blackhat-coder/received_events", "repos_url": "https://api.github.com/users/blackhat-coder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/blackhat-coder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blackhat-coder/subscriptions", "type": "User", "url": "https://api.github.com/users/blackhat-coder", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @blackhat-coder, thanks for reporting.\r\n\r\nPlease note that the `transformers` library updated their data collators API last year (version 4.10.0):\r\n- huggingface/transformers#13105\r\n\r\nnow requiring to pass `return_tensors` argument at Data Collator instantiation.\r\n\r\nAnd therefore, we also updated in the `datasets` library documentation all the examples using `transformers` data collators.\r\n\r\nIf you would like to follow our examples, please update your installed `transformers` version:\r\n```\r\npip install -U transformers\r\n```" ]
2022-04-01T17:02:47
2022-04-04T07:24:37
2022-04-04T07:21:31
NONE
null
null
null
null
## Describe the bug Hi ### Error 1 Running the Tensforlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors' ### Error 2 `DataCollatorWithPadding` isn't imported ## Steps to reproduce the bug ```python import tensorflow as tf from datasets import load_dataset from transformers import AutoTokenizer dataset = load_dataset('glue', 'mrpc', split='train') tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'], shuffle=True, batch_size=16, collate_fn=data_collator, ) ``` This is the same code on Huggingface.co ## Actual results TypeError: __init__() got an unexpected keyword argument 'return_tensors' ## Environment info - `datasets` version: 2.0.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.7 - PyArrow version: 6.0.0 - Pandas version: 1.4.1 >
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4084/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4084/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 14:18:44
https://api.github.com/repos/huggingface/datasets/issues/4080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4080/comments
https://api.github.com/repos/huggingface/datasets/issues/4080/events
https://github.com/huggingface/datasets/issues/4080
1,189,667,296
I_kwDODunzps5G6OHg
4,080
NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang", "user_view_type": "public" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @richarddwang,\r\n\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists. \r\n\r\nDuplicate of:\r\n- #4031" ]
2022-04-01T11:34:28
2022-04-01T13:59:10
2022-04-01T13:59:10
CONTRIBUTOR
null
null
null
null
## Steps to reproduce the bug ```python datasets.load_dataset("conll2012_ontonotesv5", "english_v12") ``` ## Actual results ``` Downloading builder script: 32.2kB [00:00, 9.72MB/s] Downloading metadata: 20.0kB [00:00, 10.4MB/s] Downloading and preparing dataset conll2012_ontonotesv5/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size , total: 379.12 MiB) to ... Traceback (most recent call last): [315/390] File "/home/yisiang/lgtn/conll2012/run.py", line 86, in <module> train() File "/home/yisiang/lgtn/conll2012/run.py", line 65, in train trainer.fit(model, datamodule=dm) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit self._call_and_handle_interrupt( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_inte rrupt return trainer_fn(*args, **kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1131, in _run self._data_connector.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 154, in pre pare_data self.trainer.datamodule.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn fn(*args, **kwargs) File "/home/yisiang/lgtn/_abstract_task/data.py", line 43, in prepare_data raw_dsets = datasets.load_dataset(**load_dataset_kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/load.py", line 1687, in load_dataset builder_instance.download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 605, in download_and_prepare self._download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 1104, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 676, in _download_and_prepare verify_checksums( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4080/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4080/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2:24:42
https://api.github.com/repos/huggingface/datasets/issues/4077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4077/comments
https://api.github.com/repos/huggingface/datasets/issues/4077/events
https://github.com/huggingface/datasets/issues/4077
1,189,467,585
I_kwDODunzps5G5dXB
4,077
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[]
2022-04-01T08:49:13
2022-04-01T16:16:19
2022-04-01T16:16:19
CONTRIBUTOR
null
null
null
null
## Describe the bug When uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine. Basically, I do: ``` from datasets import load_dataset dataset = load_dataset("imagefolder", data_files="path_to_my_files") dataset.push_to_hub("dataset_name") # works fine, no errors reloaded_dataset = load_dataset("dataset_name") ``` and it returns: ``` /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` I created a Colab notebook to reproduce my error: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4077/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4077/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
7:27:06
https://api.github.com/repos/huggingface/datasets/issues/4075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4075/comments
https://api.github.com/repos/huggingface/datasets/issues/4075/events
https://github.com/huggingface/datasets/issues/4075
1,188,462,162
I_kwDODunzps5G1n5S
4,075
Add CCAgT dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/johnnv1", "id": 20444345, "login": "johnnv1", "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "repos_url": "https://api.github.com/users/johnnv1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "type": "User", "url": "https://api.github.com/users/johnnv1", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/johnnv1", "id": 20444345, "login": "johnnv1", "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "repos_url": "https://api.github.com/users/johnnv1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "type": "User", "url": "https://api.github.com/users/johnnv1", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/johnnv1", "id": 20444345, "login": "johnnv1", "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "repos_url": "https://api.github.com/users/johnnv1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "type": "User", "url": "https://api.github.com/users/johnnv1", "user_view_type": "public" } ]
[ "Awesome ! Let us know if you have questions or if we can help ;) I'm assigning you\r\n\r\nPS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.", "HI, I was waiting to come out in the second version to do the implementation.\r\n\r\n- Paper: https://dx.doi.org/10.2139/ssrn.4126881\r\n- Data: [Data mendelay](http://doi.org/10.17632/wg4bpm33hj.2)", "Nice ! 🚀 ", "The link of CCAgT dataset is: https://huggingface.co/datasets/lapix/CCAgT" ]
2022-03-31T18:20:28
2022-07-06T19:03:42
2022-07-06T19:03:42
NONE
null
null
null
null
## Adding a Dataset - **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique - **Description:** The dataset contains 2540 images (1600x1200, where each pixel is 0.111μm×0.111μm) from three different slides, having at least one nucleus per image. These images are from fields belonging to a sample cervical slide, stained with silver, a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR). - **Paper:** https://doi.org/10.1109/cbms49503.2020.00110 - **Data:** https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ or https://drive.google.com/drive/u/4/folders/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0 - **Motivation:** This is a unique dataset (because of the stain) for a major health problem, cervical cancer, with real data. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Hi, this is a public version of the dataset that I have been working on; we will soon have another version of it. But until the new version goes out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible.
{ "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/johnnv1", "id": 20444345, "login": "johnnv1", "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "repos_url": "https://api.github.com/users/johnnv1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "type": "User", "url": "https://api.github.com/users/johnnv1", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4075/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4075/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
97 days, 0:43:14
https://api.github.com/repos/huggingface/datasets/issues/4074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4074/comments
https://api.github.com/repos/huggingface/datasets/issues/4074/events
https://github.com/huggingface/datasets/issues/4074
1,188,449,142
I_kwDODunzps5G1kt2
4,074
Error in google/xtreme_s dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/1048544?v=4", "events_url": "https://api.github.com/users/wranai/events{/privacy}", "followers_url": "https://api.github.com/users/wranai/followers", "following_url": "https://api.github.com/users/wranai/following{/other_user}", "gists_url": "https://api.github.com/users/wranai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wranai", "id": 1048544, "login": "wranai", "node_id": "MDQ6VXNlcjEwNDg1NDQ=", "organizations_url": "https://api.github.com/users/wranai/orgs", "received_events_url": "https://api.github.com/users/wranai/received_events", "repos_url": "https://api.github.com/users/wranai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wranai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wranai/subscriptions", "type": "User", "url": "https://api.github.com/users/wranai", "user_view_type": "public" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
[ "Hi @wranai, thanks for reporting.\r\n\r\nPlease note that the information about language families and groups is taken form the original paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752).\r\n\r\nIf that information is wrong, feel free to contact the paper's authors to suggest that correction.\r\n\r\nJust note that Hungarian language (contrary to their geographically surrounding neighbor languages) belongs to the Uralic (languages) family, together with (among others) Finnish, Estonian, some other languages in northern regions of Scandinavia..." ]
2022-03-31T18:07:45
2022-04-01T08:12:56
2022-04-01T08:12:56
NONE
null
null
null
null
**Link:** https://huggingface.co/datasets/google/xtreme_s Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4074/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4074/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
14:05:11
https://api.github.com/repos/huggingface/datasets/issues/4071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4071/comments
https://api.github.com/repos/huggingface/datasets/issues/4071/events
https://github.com/huggingface/datasets/issues/4071
1,187,587,683
I_kwDODunzps5GySZj
4,071
Loading issue for xuyeliu/notebookCDG dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/46160972?v=4", "events_url": "https://api.github.com/users/Jun-jie-Huang/events{/privacy}", "followers_url": "https://api.github.com/users/Jun-jie-Huang/followers", "following_url": "https://api.github.com/users/Jun-jie-Huang/following{/other_user}", "gists_url": "https://api.github.com/users/Jun-jie-Huang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jun-jie-Huang", "id": 46160972, "login": "Jun-jie-Huang", "node_id": "MDQ6VXNlcjQ2MTYwOTcy", "organizations_url": "https://api.github.com/users/Jun-jie-Huang/orgs", "received_events_url": "https://api.github.com/users/Jun-jie-Huang/received_events", "repos_url": "https://api.github.com/users/Jun-jie-Huang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jun-jie-Huang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jun-jie-Huang/subscriptions", "type": "User", "url": "https://api.github.com/users/Jun-jie-Huang", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
[ "Hi @Jun-jie-Huang,\r\n\r\nAs the error message says, \".pkl\" data files are not supported.\r\n\r\nIf you would like to share your dataset on the Hub, you would need:\r\n- either to create a Python loading script, that loads the data in any format\r\n- or to transform your data files to one of the supported formats (listed in the error message above: CSV, JSON, Parquet, TXT,...)\r\n\r\nYou can find the details in our docs: \r\n- How to share a dataset: https://huggingface.co/docs/datasets/share\r\n- How to create a dataset loading script: https://huggingface.co/docs/datasets/dataset_script\r\n\r\nFeel free to re-open this issue and ping us if you need further assistance." ]
2022-03-31T06:36:29
2022-03-31T08:17:01
2022-03-31T08:16:16
NONE
null
null
null
null
## Dataset viewer issue for '*xuyeliu/notebookCDG*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)* *Couldn't load the xuyeliu/notebookCDG with provided scripts: * ``` from datasets import load_dataset dataset = load_dataset("xuyeliu/notebookCDG/dataset_notebook.pkl") ``` I get an error message as follows: FileNotFoundError: Couldn't find a dataset script at /home/code_documentation/code/xuyeliu/notebookCDG/notebookCDG.py or any data file in the same directory. Couldn't find 'xuyeliu/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] Am I the one who added this dataset ? No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4071/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4071/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1:39:47
https://api.github.com/repos/huggingface/datasets/issues/4062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4062/comments
https://api.github.com/repos/huggingface/datasets/issues/4062/events
https://github.com/huggingface/datasets/issues/4062
1,186,330,732
I_kwDODunzps5Gtfhs
4,062
Loading mozilla-foundation/common_voice_7_0 dataset failed
{ "avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4", "events_url": "https://api.github.com/users/aapot/events{/privacy}", "followers_url": "https://api.github.com/users/aapot/followers", "following_url": "https://api.github.com/users/aapot/following{/other_user}", "gists_url": "https://api.github.com/users/aapot/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aapot", "id": 19529125, "login": "aapot", "node_id": "MDQ6VXNlcjE5NTI5MTI1", "organizations_url": "https://api.github.com/users/aapot/orgs", "received_events_url": "https://api.github.com/users/aapot/received_events", "repos_url": "https://api.github.com/users/aapot/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aapot/subscriptions", "type": "User", "url": "https://api.github.com/users/aapot", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @aapot, thanks for reporting.\r\n\r\nWe are investigating the cause of this issue. We will keep you informed. ", "When making HTTP request from code line:\r\n```\r\nresponse = requests.get(f\"{_API_URL}/bucket/dataset/{path}/{use_cdn}\", timeout=10.0).json()\r\n```\r\nit cannot be decoded to JSON because it raises a 404 Not Found error.\r\n\r\nThe request is fixed if removing the `/{use_cdn}` from the URL.\r\n\r\nMaybe there was a change in the Common Voice API?\r\n\r\nCC: @anton-l @patrickvonplaten @polinaeterna ", "We have contacted by email the data owners of the Common Voice dataset.", "Hotfix: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/commit/17b237961e4f7f84a2a0aea645abe5428a9d568e", "I have also made the hotfix for all the rest of Common Voice script versions: 8.0, 6.1, 6.0,..., 1.0", "Hey, is there anything new?\r\nI could not load the dataset.", "cc @lhoestq @polinaeterna ", "Hi @ngoquanghuy99! The dataset should load fine if you go through the following steps:\r\n\r\n1. Go to https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0 and click \"Access repository\" if you see a message about sharing your contact information with Mozilla Foundation at the top of the page. If you've already done that then skip to step 2.\r\n2. Run the command `huggingface-cli login` in your terminal or notebook to authenticate your machine.\r\n3. Load the dataset with `use_auth_token=True`:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"mozilla-foundation/common_voice_9_0\", \"ab\", use_auth_token=True)\r\n```", "Thanks @anton-l \r\nI could load the dataset now, but in another way.\r\nThanks anyways!", "> Thanks @anton-l I could load the dataset now, but in another way. Thanks anyways!\r\n\r\nCan you share the \"another way\" please?" ]
2022-03-30T11:39:41
2024-06-09T12:12:46
2022-03-31T08:18:04
NONE
null
null
null
null
## Describe the bug I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too, not just the `fi` and `test` split. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token="YOUR TOKEN") ``` ## Expected results load `mozilla-foundation/common_voice_7_0` dataset successfully ## Actual results ``` JSONDecodeError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs) 909 try: --> 910 return complexjson.loads(self.text, **kwargs) 911 except JSONDecodeError as e: /opt/conda/lib/python3.7/site-packages/simplejson/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, **kw) 524 and not use_decimal and not kw): --> 525 return _default_decoder.decode(s) 526 if cls is None: /opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in decode(self, s, _w, _PY3) 369 s = str(s, self.encoding) --> 370 obj, end = self.raw_decode(s) 371 end = _w(s, end).end() /opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in raw_decode(self, s, idx, _w, _PY3) 399 idx += 3 --> 400 return self.scan_once(s, idx=_w(s, idx).end()) JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: JSONDecodeError Traceback (most recent call last) /tmp/ipykernel_358/370980805.py in <module> 1 # load Common Voice 7.0 dataset from Huggingface with Finnish "test" split ----> 2 test_dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1690 ignore_verifications=ignore_verifications, 1691 try_from_hf_gcs=try_from_hf_gcs, -> 1692 use_auth_token=use_auth_token, 1693 ) 1694 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 604 if not downloaded_from_gcs: 605 self._download_and_prepare( --> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 607 ) 608 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) 1102 1103 def _download_and_prepare(self, dl_manager, verify_infos): -> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) 1105 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 670 split_dict = SplitDict(dataset_name=self.name) 671 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 672 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 673 674 # Checksums verification ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _split_generators(self, dl_manager) 151 152 self._log_download(self.config.name, bundle_version, hf_auth_token) --> 153 archive = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template)) 154 155 if self.config.version < datasets.Version("5.0.0"): ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _get_bundle_url(self, locale, url_template) 130 path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'") 131 use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024 --> 132 response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json() 133 return response["url"] 134 /opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs) 915 raise RequestsJSONDecodeError(e.message) 916 else: --> 917 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) 918 919 @property JSONDecodeError: [Errno Expecting value] Not Found: 0 ``` ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 5.0.0 - Pandas version: 1.3.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4062/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4062/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
20:38:23
https://api.github.com/repos/huggingface/datasets/issues/4061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4061/comments
https://api.github.com/repos/huggingface/datasets/issues/4061/events
https://github.com/huggingface/datasets/issues/4061
1,186,317,071
I_kwDODunzps5GtcMP
4,061
Loading cnn_dailymail dataset failed
{ "avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4", "events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}", "followers_url": "https://api.github.com/users/Arij-Aladel/followers", "following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}", "gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Arij-Aladel", "id": 68355048, "login": "Arij-Aladel", "node_id": "MDQ6VXNlcjY4MzU1MDQ4", "organizations_url": "https://api.github.com/users/Arij-Aladel/orgs", "received_events_url": "https://api.github.com/users/Arij-Aladel/received_events", "repos_url": "https://api.github.com/users/Arij-Aladel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions", "type": "User", "url": "https://api.github.com/users/Arij-Aladel", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @Arij-Aladel, thanks for reporting.\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it in our 2.0.0 release. See:\r\n- #3787 \r\n\r\nPlease, update your `datasets` version:\r\n```\r\npip install -U datasets\r\n```\r\nand retry loading the dataset by forcing its redownload:\r\n```python\r\ndataset = load_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```" ]
2022-03-30T11:29:02
2022-03-30T13:36:14
2022-03-30T13:36:14
NONE
null
null
null
null
## Describe the bug I wanted to load the cnn_dailymail dataset from huggingface datasets on jupyter lab, but I am getting the error `NotADirectoryError: [Errno 20] Not a directory` while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` ## Expected results load `cnn_dailymail` dataset successfully ## Actual results failed to load and got the error > NotADirectoryError: [Errno 20] Not a directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Ubuntu-20.04 - Python version: 3.9.10 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4061/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4061/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2:07:12
https://api.github.com/repos/huggingface/datasets/issues/4057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4057/comments
https://api.github.com/repos/huggingface/datasets/issues/4057/events
https://github.com/huggingface/datasets/issues/4057
1,185,442,001
I_kwDODunzps5GqGjR
4,057
`load_dataset` consumes too much memory for audio + tar archives
{ "avatar_url": "https://avatars.githubusercontent.com/u/50839826?v=4", "events_url": "https://api.github.com/users/JFCeron/events{/privacy}", "followers_url": "https://api.github.com/users/JFCeron/followers", "following_url": "https://api.github.com/users/JFCeron/following{/other_user}", "gists_url": "https://api.github.com/users/JFCeron/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JFCeron", "id": 50839826, "login": "JFCeron", "node_id": "MDQ6VXNlcjUwODM5ODI2", "organizations_url": "https://api.github.com/users/JFCeron/orgs", "received_events_url": "https://api.github.com/users/JFCeron/received_events", "repos_url": "https://api.github.com/users/JFCeron/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JFCeron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JFCeron/subscriptions", "type": "User", "url": "https://api.github.com/users/JFCeron", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! Could it be because you need to free the memory used by `tarfile` by emptying the tar `members` by any chance ?\r\n```python\r\n yield key, {\"audio\": {\"path\": audio_name, \"bytes\": audio_file_obj.read()}}\r\n audio_tarfile.members = [] # free memory\r\n key += 1\r\n```\r\n\r\nand then you can set `DEFAULT_WRITER_BATCH_SIZE` to whatever value makes more sense for your dataset.\r\n\r\nLet me know if the issue persists (which could happen, given that you managed to run your generator without RAM issues and using os.walk didn't solve the issue)", "Thanks for your reply! Tried it but the issue persists. ", "I also run out of memory when loading `mozilla-foundation/common_voice_8_0` that also uses `tarfile` via `dl_manager.iter_archive`. There seems to be some data files that stay in memory somewhere\r\n\r\nI don't have the issue with other compression formats like gzipped files", "I'm facing a similar memory leak issue when loading cv8. As you said @lhoestq \r\n\r\n`load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True, writer_batch_size=1)`\r\n\r\nThis issue is happening on a 32GB RAM machine. \r\n\r\nAny updates on how to fix this?", "I've run a memory profiler to see where's the leak comes from:\r\n\r\n![image](https://user-images.githubusercontent.com/5097052/165101712-e7060ae5-77b2-4f6a-92bd-2996dbd60b36.png)\r\n\r\n... it seems that it's related to the tarfile lib buffer reader. But I don't know why it's only happening on the huggingface script", "I have the same problem when loading video into numpy. \r\n```\r\nyield id,{ \r\n \"video\": imageio.v3.imread(video_path),\r\n \"label\": int(label)\r\n}\r\n```\r\nSince video files are heavy, it can only processes a dozen samples before OOM.", "For video datasets I think you can just define the max number of video that can stay in memory by adding this class attribute to your dataset builer:\r\n```py\r\nDEFAULT_WRITER_BATCH_SIZE = 8 # only 8 videos at a time in memory before flushing the dataset writer\r\n```", "same thing happens for me with `load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True, writer_batch_size=1)` on azure ml. seems to fill up `tmp` and not release that memory until OOM", "I'll add that I'm encountering the same issue with\r\n`load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train')`.\r\nSame for `'es'` in place of `'ceb'`.", "> I'll add that I'm encountering the same issue with\r\n> load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train').\r\n> Same for 'es' in place of 'ceb'.\r\n\r\nThis is because the Apache Beam `DirectRunner` runs with the full data in memory unfortunately. Optimizing the `DirectRunner` is not in the scope of the `datasets` library, but rather in the Apache Beam project I believe. If you have memory issues with the `DirectRunner`, please consider switching to a machine with more RAM, or to distributed processing runtimes like Spark, Flink or DataFlow. There is a bit of documentation here: https://huggingface.co/docs/datasets/beam", "> > I'll add that I'm encountering the same issue with\r\n> > `load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train')`.\r\n> > Same for `'es'` in place of `'ceb'`.\r\n> \r\n> This is because the Apache Beam `DirectRunner` runs with the full data in memory unfortunately. Optimizing the `DirectRunner` is not in the scope of the `datasets` library, but rather in the Apache Beam project I believe. 
If you have memory issues with the `DirectRunner`, please consider switching to a machine with more RAM, or to distributed processing runtimes like Spark, Flink or DataFlow. There is a bit of documentation here: https://huggingface.co/docs/datasets/beam\r\n\r\nFair enough, but this line of code crashed an AWS instance with 1024GB of RAM! I have also tried with `Runner='Flink'` on an environment with 51GB of RAM, which also failed.\r\n\r\nApache Beam has tons of open tickets already - is it worth submitting one to them over this?", "> Fair enough, but this line of code crashed an AWS instance with 1024GB of RAM!\r\n\r\nWhat, wikipedia is not even bigger than 20GB\r\n\r\ncc @albertvillanova", "> > Fair enough, but this line of code crashed an AWS instance with 1024GB of RAM!\r\n> \r\n> What, wikipedia is not even bigger than 20GB\r\n> \r\n> cc @albertvillanova\r\n\r\nLuckily, on Colab you can watch the call stack at the bottom of the screen - much of the time and space complexity seems to come from `_parse_and_clean_wikicode()` rather than the actual download process. As far as I can tell, the script is loading the full dataset and then cleaning it all at once, which is consuming a lot of memory.", "I think we are mixing many different bugs in this Issue page:\r\n- TAR archive with audio files\r\n- video file\r\n- distributed parsing of Wikipedia using Apache Beam\r\n\r\n@dan-the-meme-man may I ask you to open a separate Issue for your problem? Then I will address it. It is important to fix it because we are currently working on a Datasets enhancement to be able to provide all Wikipedias already preprocessed.\r\n\r\nOn the other hand, I think we could keep this Issue page for the original problem: TAR archive with audio files. That is not fixed yet either.", "Is there an update on the TAR archive issue with audio files? Happy to lend a hand in fixing this :)", "I found the issue with Common Voice 8 and opened a PR to fix it: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/discussions/2\r\n\r\nBasically the `metadata` dict that contains the transcripts per audio file was continuously getting filled with bytes from `f.read()` because of this code:\r\n```python\r\nresult = metadata[path]\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": f.read()}\r\n```\r\ncopying the result with `result = dict(metadata[path])` fixes it: the bytes are no longer added to `metadata`\r\n\r\nI also opened PRs to the other CV datasets", "Amazing, that's a great find! Thanks @lhoestq!", "I'm closing this one for now, but feel free to reopen if you encounter other memory issues with audio datasets" ]
2022-03-29T21:38:55
2022-08-16T10:22:55
2022-08-16T10:22:55
NONE
null
null
null
null
## Description `load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists. ## Steps to reproduce the bug Here's my implementation of `_generate_examples`: ```python class MyDatasetBuilder(datasets.GeneratorBasedBuilder): DEFAULT_WRITER_BATCH_SIZE = 1 ... def _split_generators(self, dl_manager): archive_path = dl_manager.download(_DL_URLS[self.config.name]) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "audio_tarfile_path": archive_path["audio_tarfile"] }, ), ] def _generate_examples(self, audio_tarfile_path): key = 0 with tarfile.open(audio_tarfile_path, mode="r|") as audio_tarfile: for audio_tarinfo in audio_tarfile: audio_name = audio_tarinfo.name audio_file_obj = audio_tarfile.extractfile(audio_tarinfo) yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}} key += 1 ``` I then try to load via `ds = load_dataset('./datasets/my_new_dataset', writer_batch_size=1)`, and memory usage grows until all 8GB of my machine are taken and process is killed (`Killed`). Also tried an untarred version of this using `os.walk` but the same happened. I created a script to confirm that one can safely go through such a generator, which runs just fine with memory <500MB at all times. ```python import tarfile def generate_examples(): audio_tarfile = tarfile.open("audios.tar", mode="r|") key = 0 for audio_tarinfo in audio_tarfile: audio_name = audio_tarinfo.name audio_file_obj = audio_tarfile.extractfile(audio_tarinfo) yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}} key += 1 if __name__ == "__main__": examples = generate_examples() for example in examples: pass ``` ## Expected results Memory consumption should be similar to the non-huggingface script. ## Actual results Process is killed after consuming too much memory. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4057/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4057/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
139 days, 12:44:00
https://api.github.com/repos/huggingface/datasets/issues/4056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4056/comments
https://api.github.com/repos/huggingface/datasets/issues/4056/events
https://github.com/huggingface/datasets/issues/4056
1,185,155,775
I_kwDODunzps5GpAq_
4,056
Unexpected behavior of _TempDirWithCustomCleanup
{ "avatar_url": "https://avatars.githubusercontent.com/u/22680696?v=4", "events_url": "https://api.github.com/users/JonasGeiping/events{/privacy}", "followers_url": "https://api.github.com/users/JonasGeiping/followers", "following_url": "https://api.github.com/users/JonasGeiping/following{/other_user}", "gists_url": "https://api.github.com/users/JonasGeiping/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JonasGeiping", "id": 22680696, "login": "JonasGeiping", "node_id": "MDQ6VXNlcjIyNjgwNjk2", "organizations_url": "https://api.github.com/users/JonasGeiping/orgs", "received_events_url": "https://api.github.com/users/JonasGeiping/received_events", "repos_url": "https://api.github.com/users/JonasGeiping/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JonasGeiping/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JonasGeiping/subscriptions", "type": "User", "url": "https://api.github.com/users/JonasGeiping", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "Hi ! Would setting TMPDIR at the beginning of your python script/session work ? I mean, even before importing transformers, datasets, etc. and using them ? I think this would be the most robust solution given any library that uses `tempfile`. I don't think we aim to support environment variables to be changed at run time", "Hi, yeah setting the environment variable before the imports / as environment variable outside is another way to fix this. I am just arguing that `datasets` already uses its own global variable to track temporary files: `_TEMP_DIR_FOR_TEMP_CACHE_FILES`, and the creation of this global variable should respect TMPDIR instead of relying on tempfile to do so." ]
2022-03-29T16:58:22
2022-03-30T15:08:04
null
NONE
null
null
null
null
## Describe the bug This is not 100% a bug in `datasets`, but behavior that surprised me and that I think could be made more robust on the `datasets` side. When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I want to set TMPDIR at runtime using `os.environ["TMPDIR"] = something`, but depending on other imported modules this can fail to take effect. ## Steps to reproduce the bug `_TempDirWithCustomCleanup` relies on `tempfile` to generate a path to a temporary directory. However, `tempfile` generates the path only once. This can be a problem when trying to set TMPDIR at runtime whenever other code imports `tempfile` first and does something unexpected. For example (after too much trial and error) I found out that a different part of the code base I work with defines a class `PatchedDataCollatorForLanguageModeling(transformers.DataCollatorForLanguageModeling)` based on a `transformers` class. This import is enough to trigger `tempfile` to generate a temporary path, leading to the wrong path being cached in `tempfile.tempdir`. ## Suggestion: I could also file this as a bug with `transformers`, but I think fixing it on the `datasets` side would be much more robust: datasets could recompute the temporary path once (technically possible via `tempfile._get_default_tempdir` or by resetting the global variable `tempfile.tempdir` to None) before setting its own global `_TEMP_DIR_FOR_TEMP_CACHE_FILES`.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4056/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4056/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/4053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4053/comments
https://api.github.com/repos/huggingface/datasets/issues/4053/events
https://github.com/huggingface/datasets/issues/4053
1,184,500,378
I_kwDODunzps5Gmgqa
4,053
Modify datatype from `int32` to `float` for pearsonr, spearmanr.
{ "avatar_url": "https://avatars.githubusercontent.com/u/86637320?v=4", "events_url": "https://api.github.com/users/woodywarhol9/events{/privacy}", "followers_url": "https://api.github.com/users/woodywarhol9/followers", "following_url": "https://api.github.com/users/woodywarhol9/following{/other_user}", "gists_url": "https://api.github.com/users/woodywarhol9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/woodywarhol9", "id": 86637320, "login": "woodywarhol9", "node_id": "MDQ6VXNlcjg2NjM3MzIw", "organizations_url": "https://api.github.com/users/woodywarhol9/orgs", "received_events_url": "https://api.github.com/users/woodywarhol9/received_events", "repos_url": "https://api.github.com/users/woodywarhol9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/woodywarhol9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/woodywarhol9/subscriptions", "type": "User", "url": "https://api.github.com/users/woodywarhol9", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "@Woodywarhol9 good catch, thanks for reporting.\r\n\r\nWe are fixing this." ]
2022-03-29T08:27:41
2022-03-29T14:02:20
2022-03-29T14:02:20
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** - Currently, [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both take their input data as 'int32'. **Describe the solution you'd like** - Considering that these metrics are widely used for the STS task (whose labels are 'float'), it would be better to change the datatype from 'int32' to 'float' so that exact similarity values can be computed.
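A minimal sketch of the requested change, assuming the metric scripts declare their inputs with a `datasets.Features` spec along these lines (the exact layout should be checked against the linked files):

```python
import datasets

# STS-style labels are continuous, so declaring them as int32 would truncate
# values such as 3.8 down to 3 before the correlation is computed.
features = datasets.Features(
    {
        "predictions": datasets.Value("float"),  # previously "int32"
        "references": datasets.Value("float"),   # previously "int32"
    }
)
```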
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4053/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4053/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
5:34:39
https://api.github.com/repos/huggingface/datasets/issues/4052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4052/comments
https://api.github.com/repos/huggingface/datasets/issues/4052/events
https://github.com/huggingface/datasets/issues/4052
1,184,447,977
I_kwDODunzps5GmT3p
4,052
metric = metric_cls( TypeError: 'NoneType' object is not callable
{ "avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4", "events_url": "https://api.github.com/users/klyuhang9/events{/privacy}", "followers_url": "https://api.github.com/users/klyuhang9/followers", "following_url": "https://api.github.com/users/klyuhang9/following{/other_user}", "gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klyuhang9", "id": 39409233, "login": "klyuhang9", "node_id": "MDQ6VXNlcjM5NDA5MjMz", "organizations_url": "https://api.github.com/users/klyuhang9/orgs", "received_events_url": "https://api.github.com/users/klyuhang9/received_events", "repos_url": "https://api.github.com/users/klyuhang9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions", "type": "User", "url": "https://api.github.com/users/klyuhang9", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [2]: metric = load_metric('glue', 'rte')\r\nDownloading builder script: 5.76kB [00:00, 2.40MB/s]\r\n```\r\n\r\nCould you please, retry to load the metric? Sometimes there are temporary connectivity issues.\r\n\r\nFeel free to re-open this issue of the problem persists." ]
2022-03-29T07:43:08
2022-03-29T14:06:01
2022-03-29T14:06:01
NONE
null
null
null
null
Hi, friend. I ran into a problem. When I run the code: `metric = load_metric('glue', 'rte')` the following error is raised: `metric = metric_cls( TypeError: 'NoneType' object is not callable ` I don't know why. Thanks for your help!
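A small sketch of the retry suggested in the reply; forcing a fresh download of the metric script may also help if a partially cached copy is the cause (treat the `download_mode` argument as an assumption to verify against your `datasets` version):

```python
from datasets import load_metric

# Retry, asking for the metric script to be fetched again from scratch.
metric = load_metric("glue", "rte", download_mode="force_redownload")
print(metric.compute(predictions=[0, 1], references=[0, 1]))
```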
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4052/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4052/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
6:22:53
https://api.github.com/repos/huggingface/datasets/issues/4051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4051/comments
https://api.github.com/repos/huggingface/datasets/issues/4051/events
https://github.com/huggingface/datasets/issues/4051
1,184,400,179
I_kwDODunzps5GmIMz
4,051
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4", "events_url": "https://api.github.com/users/klyuhang9/events{/privacy}", "followers_url": "https://api.github.com/users/klyuhang9/followers", "following_url": "https://api.github.com/users/klyuhang9/following{/other_user}", "gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klyuhang9", "id": 39409233, "login": "klyuhang9", "node_id": "MDQ6VXNlcjM5NDA5MjMz", "organizations_url": "https://api.github.com/users/klyuhang9/orgs", "received_events_url": "https://api.github.com/users/klyuhang9/received_events", "repos_url": "https://api.github.com/users/klyuhang9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions", "type": "User", "url": "https://api.github.com/users/klyuhang9", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [4]: ds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\nDownloading builder script: 28.8kB [00:00, 9.15MB/s] \r\nDownloading metadata: 28.7kB [00:00, 10.7MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.78 MiB, post-processed: Unknown size, total: 11.88 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.44M/7.44M [00:01<00:00, 4.12MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1047.96it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nPlease, note that sometimes GitHub has some temporary connectivity issues. Feel free to retry and re-open this issue if the problem persists.", "Maybe it's because we are in China.", "Are you able to access the URL in your web browser?", "> Are you able to access the URL in your web browser?\r\n\r\nYes, with or without a VPN, we (people in China) can access the URL. And we can even use wget to download these files. We can download the pretrained language model automatically with the code.\r\nHowever, we CANNOT access glue.py & metric.py automatically. Every time, it will raise ConnectionError, and we have to download datasets manually (SQuAD is extremely hard to preprocess) and replace metric.py with scipy.metrics. If this problem is solved, many Chinese will save a lot of time.", "> ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py\r\n> \r\n> I don't know why; it is ok when I use\r\n\r\nIf you would query the question `ConnectionError: Couldn't reach` in www.baidu.com (Chinese Google, Google is banned and some people cannot access it), you will find that there are so many questions about accessing `https://raw.githubusercontent.com`. There are some solutions like adding `185.199.108.133 raw.githubusercontent.com` to `C:/windows/systen32/drives/etc/hosts`, but it is time-consuming, hard for green-hand, and invalid sometimes." ]
2022-03-29T07:00:31
2022-05-08T07:27:32
2022-03-29T08:29:25
NONE
null
null
null
null
Hi, I ran into a problem. When I run the code: `dataset = load_dataset('glue','sst2')` the following error is raised: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py I don't know why; the URL opens fine when I view it in Google Chrome. Thanks for your help!
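A sketch of one possible workaround when only raw.githubusercontent.com is unreachable from the script, assuming the loading script can be fetched once by other means (the local path is made up; the data files themselves are downloaded from different hosts):

```python
# First fetch the loading script manually, e.g.:
#   wget https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
from datasets import load_dataset

# Point load_dataset at the local copy of the script instead of GitHub.
dataset = load_dataset("./glue.py", "sst2")
```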
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4051/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4051/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1:28:54
https://api.github.com/repos/huggingface/datasets/issues/4048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4048/comments
https://api.github.com/repos/huggingface/datasets/issues/4048/events
https://github.com/huggingface/datasets/issues/4048
1,183,804,576
I_kwDODunzps5Gj2yg
4,048
Split size error on `amazon_us_reviews` / `PC_v1_00` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/trentonstrong", "id": 191985, "login": "trentonstrong", "node_id": "MDQ6VXNlcjE5MTk4NQ==", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "repos_url": "https://api.github.com/users/trentonstrong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "type": "User", "url": "https://api.github.com/users/trentonstrong", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/trentonstrong", "id": 191985, "login": "trentonstrong", "node_id": "MDQ6VXNlcjE5MTk4NQ==", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "repos_url": "https://api.github.com/users/trentonstrong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "type": "User", "url": "https://api.github.com/users/trentonstrong", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/trentonstrong", "id": 191985, "login": "trentonstrong", "node_id": "MDQ6VXNlcjE5MTk4NQ==", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "repos_url": "https://api.github.com/users/trentonstrong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "type": "User", "url": "https://api.github.com/users/trentonstrong", "user_view_type": "public" } ]
[ "Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file.", "Hi @trentonstrong, thanks for reporting!\r\n\r\nI confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=350242049, num_examples=785730, dataset_name='amazon_us_reviews'), 'recorded': SplitInfo(name='train', num_bytes=3982712078, num_examples=6908554, dataset_name='amazon_us_reviews')}]\r\n```\r\n\r\nAlso thank you for your offer to fix this. You can find information about how to update the metadata JSON file here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n```shell\r\ndatasets-cli test datasets/amazon_us_reviews --save_infos --all_configs\r\n```\r\nPlease, feel free to open a PR with this fix. And do not hesitate to ping me if you need any help.", "No sweat. Will get it patched up ASAP." ]
2022-03-28T18:12:04
2022-04-08T12:29:30
2022-04-08T12:29:30
CONTRIBUTOR
null
null
null
null
## Describe the bug When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m. Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6m number that we see and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata. Happy to submit a PR and fix this up if it turns out to be a metadata issue, but wanted to get some other :eyes: on it first. ## Steps to reproduce the bug ```python load_dataset('amazon_us_reviews', 'PC_v1_00') ``` ## Expected results Dataset is downloaded and extracted successfully. ## Actual results A split size exception is thrown. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
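While the recorded metadata is out of date, a possible stopgap (not a fix) is to skip the verification step; the real fix is regenerating the metadata as described in the comments:

```python
from datasets import load_dataset

# Skips checksum/split-size verification so the refreshed ~6.9M-row file still
# loads; drop this flag once the dataset's metadata has been regenerated, e.g.
# via `datasets-cli test datasets/amazon_us_reviews --save_infos --all_configs`.
ds = load_dataset("amazon_us_reviews", "PC_v1_00", ignore_verifications=True)
```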
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4048/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
10 days, 18:17:26
https://api.github.com/repos/huggingface/datasets/issues/4047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4047/comments
https://api.github.com/repos/huggingface/datasets/issues/4047/events
https://github.com/huggingface/datasets/issues/4047
1,183,789,237
I_kwDODunzps5GjzC1
4,047
Dataset.unique(column: str) -> ArrowNotImplementedError
{ "avatar_url": "https://avatars.githubusercontent.com/u/1461936?v=4", "events_url": "https://api.github.com/users/orkenstein/events{/privacy}", "followers_url": "https://api.github.com/users/orkenstein/followers", "following_url": "https://api.github.com/users/orkenstein/following{/other_user}", "gists_url": "https://api.github.com/users/orkenstein/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/orkenstein", "id": 1461936, "login": "orkenstein", "node_id": "MDQ6VXNlcjE0NjE5MzY=", "organizations_url": "https://api.github.com/users/orkenstein/orgs", "received_events_url": "https://api.github.com/users/orkenstein/received_events", "repos_url": "https://api.github.com/users/orkenstein/repos", "site_admin": false, "starred_url": "https://api.github.com/users/orkenstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orkenstein/subscriptions", "type": "User", "url": "https://api.github.com/users/orkenstein", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @orkenstein, thanks for reporting.\r\n\r\nPlease note that for this case, our `datasets` library uses under the hood the Apache Arrow `unique` function: https://arrow.apache.org/docs/python/generated/pyarrow.compute.unique.html#pyarrow.compute.unique\r\n\r\nAnd currently the Apache Arrow `unique` function is only implemented for these input types (see info in their [docs](https://arrow.apache.org/docs/cpp/compute.html#array-wise-vector-functions)): Boolean, Null, Numeric, Temporal, Binary- and String-like.\r\n\r\nHowever, the data types of the `wikiann` dataset are all `list<item: string>` (see its [dataset card](https://huggingface.co/datasets/wikiann#data-fields)), and thus, not yet supported by the Apache Arrow `unique` function.", "As a workaround solution you can use pandas:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('wikiann', 'en', split='train')\r\ndf = dataset.to_pandas()\r\nunique_df = df[~df.tokens.apply(tuple).duplicated()] # from https://stackoverflow.com/a/46958336/17517845\r\n```\r\n\r\nNote that pandas loads the dataset in memory (this one is small so it's fine).", "@lhoestq thank you! I will fall back to this method for now" ]
2022-03-28T17:59:32
2022-04-01T18:24:57
2022-04-01T18:24:57
NONE
null
null
null
null
## Describe the bug I'm trying to use the `unique()` function, but it fails. ## Steps to reproduce the bug 1. Get dataset 2. Call `unique` 3. Error # Sample code to reproduce the bug ```python !pip show datasets from datasets import load_dataset dataset = load_dataset('wikiann', 'en') dataset['train'].column_names dataset['train'].unique(dataset['train'].column_names[0]) ``` ## Expected results It would be nice to actually see the unique items ## Actual results Error: ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) <ipython-input-10-5e0de07ed42c> in <module>() 6 7 dataset['train'].column_names ----> 8 dataset['train'].unique(dataset['train'].column_names[0]) 5 frames /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: Function unique has no kernel matching input types (array[list<item: string>]) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Google Colab - Python version: 3.7.13 - PyArrow version: 6.0.1
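A sketch of a workaround under the constraint described in the replies: derive a plain string key from the list column first (assuming the first column is `tokens`, as in the wikiann schema), then call `unique()` on that key:

```python
from datasets import load_dataset

ds = load_dataset("wikiann", "en", split="validation")

# pyarrow's `unique` has no kernel for list<string> columns, so build a string
# column that stands in for each token sequence and deduplicate that instead.
ds = ds.map(lambda example: {"tokens_key": " ".join(example["tokens"])})
unique_sequences = ds.unique("tokens_key")
print(len(unique_sequences))
```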
{ "avatar_url": "https://avatars.githubusercontent.com/u/1461936?v=4", "events_url": "https://api.github.com/users/orkenstein/events{/privacy}", "followers_url": "https://api.github.com/users/orkenstein/followers", "following_url": "https://api.github.com/users/orkenstein/following{/other_user}", "gists_url": "https://api.github.com/users/orkenstein/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/orkenstein", "id": 1461936, "login": "orkenstein", "node_id": "MDQ6VXNlcjE0NjE5MzY=", "organizations_url": "https://api.github.com/users/orkenstein/orgs", "received_events_url": "https://api.github.com/users/orkenstein/received_events", "repos_url": "https://api.github.com/users/orkenstein/repos", "site_admin": false, "starred_url": "https://api.github.com/users/orkenstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orkenstein/subscriptions", "type": "User", "url": "https://api.github.com/users/orkenstein", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4047/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4047/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4 days, 0:25:25
https://api.github.com/repos/huggingface/datasets/issues/4044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4044/comments
https://api.github.com/repos/huggingface/datasets/issues/4044/events
https://github.com/huggingface/datasets/issues/4044
1,183,658,942
I_kwDODunzps5GjTO-
4,044
CLI dummy data generation is broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[]
2022-03-28T16:07:37
2022-03-31T14:59:06
2022-03-31T14:59:06
MEMBER
null
null
null
null
## Describe the bug We get a TypeError when running CLI dummy data generation: ```shell datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate ``` gives: ``` File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data dataset_builder._prepare_split(split_generator) TypeError: _prepare_split() missing 1 required positional argument: 'check_duplicate_keys' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4044/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4044/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 22:51:29
https://api.github.com/repos/huggingface/datasets/issues/4041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4041/comments
https://api.github.com/repos/huggingface/datasets/issues/4041/events
https://github.com/huggingface/datasets/issues/4041
1,183,599,461
I_kwDODunzps5GjEtl
4,041
Add support for IIIF in datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of \"using IIIF through datasets scripts\" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle bad URLs in `map` by returning `None`. Plus, we can add a `Dataset Preprocessing` section with the code that explains this approach to the card of such datasets. WDYT?\r\n\r\n> currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models.\r\n\r\nThis is why (currently) adding a new feature type would be overkill, IMO.\r\n" ]
2022-03-28T15:19:25
2022-04-05T18:20:53
null
MEMBER
null
null
null
null
This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred. ## What is [IIIF](https://iiif.io/)? IIIF (International Image Interoperability Framework) > is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions. The tl;dr is that IIIF provides various specifications for implementing useful functionality for: - Institutions to make available images for various use cases - Users to have a consistent way of interacting with/requesting these images - For developers to have a common standard for developing tools for working with IIIF images that will work across all institutions that implement a particular IIIF standard (for example the image viewer for the BNF can also work for the Library of Congress if they both use IIIF). Some institutions with various levels of IIIF support include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/ ## IIIF APIs IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/) ### IIIF Image API The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL: ```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}``` A concrete example of this: ```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg``` As you can see the scheme offers a number of options that can be specified in the URL, for example, size. Using the example URL we return: ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg) We can change the size to request a size of 250 by 250; this is done by changing the size from `full` to `250,250`, i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg) We can also request the image with max width 250, max height 250 whilst maintaining the aspect ratio using `!w,h`, i.e. change the url to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg) A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size ## Why would/could this be useful for datasets? There are a few reasons why support for the IIIF Image API could be useful.
Broadly the ability to have more control over how an image is returned from a server is useful for many ML workflows: - images can be requested in the right size; this prevents having to download/stream large images when the actual desired size is much smaller - can select a subset of an image: it is possible to select a sub-region of an image, which could be useful for example when you already have a bounding box for a subset of an image and then want to use this subset of an image for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request parts of a newspaper image that have been detected as 'photograph', 'illustration' etc. for downstream use. - options for quality, rotation and the format can all be encoded in the URL request. These may become particularly useful when pre-training models on large image datasets, where the cost of downloading 1600-pixel-wide images when you actually want 240 has a larger impact. ## What could this look like in datasets? I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out but hopefully give a sense of possible approaches that match existing `datasets` methods in their approach. ### Use through datasets scripts Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options via the dataset script: ```python ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg") ``` This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script. ### Support through dataset scripts (with some datasets support) This is similar to the above but `datasets` would offer some way of saying this is an IIIF URL and then expose the options associated with IIIF images automatically, i.e. if you did something like: ```python features = {"label": ClassLabel(names=['dog','cat']), "url": datasets.IIIFURL()} ``` inside your loading script, you would automatically have exposed `size`, `fmt` etc. options when loading the dataset. ### Other possible integrations Some other possible pseudocode ways that a user could interact with IIIF URLs: The ability to cast to an `IIIFImage` feature type: ``` ds.cast_column('url', IIIFImage, download=False) ``` The ability to specify some options associated with IIIF urls. ``` ds = ds.set_iiif_options(column='url', size="250,250") ``` I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`; the difference would be that the underlying URL could be modified in various ways. ## prerequisite requirements There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support, which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support: ### support for handling failed images loaded via a URL (or a specific IIIFImage feature). Working with images via web requests will inevitably return the odd failed request. If these images are then requested and don't return, it would be useful to have a `None` returned instead of an error.
For example, when using `push_to_hub` `datasets` will try to include the image but currently fails with bad URLs. ```python from datasets import Dataset import datasets urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3 urls.append("badurl.com/image.jpg") data = {"url":urls} ds = Dataset.from_dict(data) ds = ds.cast_column('url', datasets.Image()) ds[3]['url'] ``` returns a `FileNotFoundError`; for streaming large datasets of images using their URLs it could be useful to have `None` returned instead. This has implications for the actual training loop, i.e. you now need to somehow skip those examples; because of this it might not be desirable to support this. ### Caching support Since IIIF requests images via a URL it would be great to have a way of not requesting the images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142 and I think this would also be very desirable to have here, particularly as one of the primary use cases of IIIF may be to do unsupervised pre-training on large datasets of IIIF URLs. ### Support for Parsing IIIF URLs This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the user specifies is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share (a rough sketch along those lines is appended at the end of this issue). ## Why it might not be worthwhile/suitable for datasets There are some reasons that this might not be worth implementing: - currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models. - It may end up being better to leave this to the user. It would for example be possible for someone to write map functions to change an IIIF URL to the correct size etc. Adding direct support for IIIF in datasets may potentially not be worth the trouble. - The impact of different approaches to image scaling can affect the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images, this could have a downstream impact on model performance. I think this is something that could be flagged to the end-user in the documentation. This probably also falls into general "gotchas" that probably aren't the `datasets` library's role to protect users from. Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets. ## Suggested next steps: I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues, let me know and I can open new issues for those.
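A rough, self-contained sketch of the kind of `dataclasses`-based URL builder mentioned above (all class and field names here are invented; libraries such as piffle cover this more completely):

```python
from dataclasses import dataclass


@dataclass
class IIIFImageURL:
    # Fields follow the Image API template:
    # {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    base: str                 # e.g. "https://stacks.stanford.edu/image/iiif"
    identifier: str           # e.g. "hg676jb4964%2F0380_796-44"
    region: str = "full"
    size: str = "full"
    rotation: str = "0"
    quality: str = "default"
    fmt: str = "jpg"

    def url(self) -> str:
        return (
            f"{self.base}/{self.identifier}/{self.region}/"
            f"{self.size}/{self.rotation}/{self.quality}.{self.fmt}"
        )


# Request the same image capped at 250x250 while preserving the aspect ratio.
thumb = IIIFImageURL(
    base="https://stacks.stanford.edu/image/iiif",
    identifier="hg676jb4964%2F0380_796-44",
    size="!250,250",
)
print(thumb.url())
```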
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4041/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/4037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4037/comments
https://api.github.com/repos/huggingface/datasets/issues/4037/events
https://github.com/huggingface/datasets/issues/4037
1,183,144,486
I_kwDODunzps5GhVom
4,037
Error while building documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "After some investigation, maybe the bug is in `doc-builder`.\r\n\r\nI've opened an issue there:\r\n- huggingface/doc-builder#160", "Fixed by @lewtun (thank you):\r\n- huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504" ]
2022-03-28T09:22:44
2022-03-28T10:01:52
2022-03-28T10:00:48
MEMBER
null
null
null
null
## Describe the bug Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct. ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4037/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4037/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:38:04
https://api.github.com/repos/huggingface/datasets/issues/4032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4032/comments
https://api.github.com/repos/huggingface/datasets/issues/4032/events
https://github.com/huggingface/datasets/issues/4032
1,182,595,697
I_kwDODunzps5GfPpx
4,032
can't download cats_vs_dogs dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/74569835?v=4", "events_url": "https://api.github.com/users/RRaphaell/events{/privacy}", "followers_url": "https://api.github.com/users/RRaphaell/followers", "following_url": "https://api.github.com/users/RRaphaell/following{/other_user}", "gists_url": "https://api.github.com/users/RRaphaell/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RRaphaell", "id": 74569835, "login": "RRaphaell", "node_id": "MDQ6VXNlcjc0NTY5ODM1", "organizations_url": "https://api.github.com/users/RRaphaell/orgs", "received_events_url": "https://api.github.com/users/RRaphaell/received_events", "repos_url": "https://api.github.com/users/RRaphaell/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RRaphaell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RRaphaell/subscriptions", "type": "User", "url": "https://api.github.com/users/RRaphaell", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Thnaks for reporting @RRaphaell.\r\n\r\nWe are fixing it. " ]
2022-03-27T17:05:39
2022-03-28T07:44:24
2022-03-28T07:44:24
NONE
null
null
null
null
## Describe the bug The cats_vs_dogs dataset can't be downloaded; loading it fails with a checksum mismatch on the source files. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cats_vs_dogs") ``` ## Expected results The dataset loads successfully. ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip'] ## Environment info Fresh Google Colab notebook
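A minimal sketch of the usual client-side workarounds for a `NonMatchingChecksumError`, assuming the upstream archive changed rather than the download being corrupted; the real fix for this issue landed on the library side, so these are stop-gaps only.

```python
from datasets import load_dataset

# Retry with a fresh download in case a stale or partial cached file caused the mismatch.
dataset = load_dataset("cats_vs_dogs", download_mode="force_redownload")

# If the hosted zip itself changed (so the recorded checksum is outdated),
# verification can be skipped as a temporary workaround until the script is fixed.
dataset = load_dataset("cats_vs_dogs", ignore_verifications=True)
```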
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4032/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4032/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
14:38:45
https://api.github.com/repos/huggingface/datasets/issues/4031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4031/comments
https://api.github.com/repos/huggingface/datasets/issues/4031/events
https://github.com/huggingface/datasets/issues/4031
1,182,415,124
I_kwDODunzps5GejkU
4,031
Cannot load the dataset conll2012_ontonotesv5
{ "avatar_url": "https://avatars.githubusercontent.com/u/8326473?v=4", "events_url": "https://api.github.com/users/cathyxl/events{/privacy}", "followers_url": "https://api.github.com/users/cathyxl/followers", "following_url": "https://api.github.com/users/cathyxl/following{/other_user}", "gists_url": "https://api.github.com/users/cathyxl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cathyxl", "id": 8326473, "login": "cathyxl", "node_id": "MDQ6VXNlcjgzMjY0NzM=", "organizations_url": "https://api.github.com/users/cathyxl/orgs", "received_events_url": "https://api.github.com/users/cathyxl/received_events", "repos_url": "https://api.github.com/users/cathyxl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cathyxl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cathyxl/subscriptions", "type": "User", "url": "https://api.github.com/users/cathyxl", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @cathyxl, thanks for reporting.\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists." ]
2022-03-27T07:38:23
2022-03-28T06:58:31
2022-03-28T06:31:18
NONE
null
null
null
null
## Describe the bug Cannot load the dataset conll2012_ontonotesv5 ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test") print(dataset) ``` ## Expected results The datasets should be downloaded successfully ## Actual results raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 7.0.0
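A sketch of the workaround the maintainers describe in the comment above: install the patched loading script from the GitHub repo, then force the data files to be redownloaded so the previously cached (mismatching) archive is not reused.

```python
# Prerequisite (shell): pip install git+https://github.com/huggingface/datasets#egg=datasets
from datasets import load_dataset

ds = load_dataset(
    "conll2012_ontonotesv5",
    "english_v4",
    split="test",
    download_mode="force_redownload",  # bypass the cached archive with the stale checksum
)
print(ds)
```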
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4031/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4031/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
22:52:55
https://api.github.com/repos/huggingface/datasets/issues/4029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4029/comments
https://api.github.com/repos/huggingface/datasets/issues/4029/events
https://github.com/huggingface/datasets/issues/4029
1,181,057,011
I_kwDODunzps5GZX_z
4,029
Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
{ "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MoritzLaurer", "id": 41862082, "login": "MoritzLaurer", "node_id": "MDQ6VXNlcjQxODYyMDgy", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "type": "User", "url": "https://api.github.com/users/MoritzLaurer", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Hi ! You can access the faiss index with\r\n```python\r\nfaiss_index = my_dataset.get_index(\"my_index_name\").faiss_index\r\n```\r\nand then do whatever you want with it, e.g. query it using range_search:\r\n```python\r\nthreshold = 0.95\r\nlimits, distances, indices = faiss_index.range_search(x=xq, thresh=threshold)\r\n\r\ntexts = dataset[indices]\r\n```", "wow, that's great, thank you for the explanation. (if that's not already in the documentation, could be worth adding it)\r\n\r\nwhich type of faiss index is Datasets using? I looked into faiss recently and I understand that there are several different types of indexes and the choice is important, e.g. regarding which distance metric you use (euclidian vs. cosine/dot product), the size of my dataset etc. can I chose the type of index somehow as well?", "`Dataset.add_faiss_index` has a `string_factory` parameter, used to set the type of index (see the faiss documentation about [index factory](https://github.com/facebookresearch/faiss/wiki/The-index-factory)). Alternatively, you can pass an index you've defined yourself using faiss with the `custom_index` parameter of `Dataset.add_faiss_index` \r\n\r\nHere is the full documentation of `Dataset.add_faiss_index`: https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Dataset.add_faiss_index", "great thanks, I will try it out" ]
2022-03-25T17:31:33
2022-05-06T08:35:52
2022-05-06T08:35:52
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I would like to be able to repeat many different queries on the dataset quickly. **Describe the solution you'd like** dataset objects currently have the .get_nearest_examples() method for text retrieval via FAISS. But this only allows retrieving a specific number of K texts instead of everything above a specified similarity threshold. It would be great if HF Datasets would also support the FAISS method .range_search() for retrieving texts above a certain similarity threshold. see details here: https://github.com/facebookresearch/faiss/issues/1273 **Describe alternatives you've considered** I've considered using native FAISS, but doing this via HF datasets would be better. My assumption is that Dataset features like dataset streaming make it easier to work with large datasets **Additional context** The concrete use-case is: I have a large dataset (wikipedia) and I would like to retrieve all paragraphs which are similar to a query. I will use sentence-transformers for encoding the texts.
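A rough sketch of threshold-based retrieval built from the suggestion in the comments above (accessing the underlying FAISS index via `get_index(...).faiss_index` and calling `range_search`), combined with sentence-transformers encoding; the toy corpus, model name, and threshold value are illustrative assumptions, not from the issue.

```python
import numpy as np
from datasets import Dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy corpus standing in for the Wikipedia paragraphs mentioned above.
texts = ["He hid the axe under his coat.", "The weather was warm and sunny."]
ds = Dataset.from_dict({"text": texts})
ds = ds.map(lambda batch: {"embeddings": model.encode(batch["text"])}, batched=True)
ds.add_faiss_index(column="embeddings")  # defaults to an exact (flat) index

query = np.asarray(model.encode(["concealing a weapon"]), dtype="float32")

# range_search is a plain faiss call on the underlying index; the meaning of
# `thresh` depends on the index metric (max L2 distance vs. min inner-product score).
faiss_index = ds.get_index("embeddings").faiss_index
limits, distances, indices = faiss_index.range_search(x=query, thresh=1.0)

# Rows passing the threshold for the first (and only) query.
matches = ds[[int(i) for i in indices[limits[0]:limits[1]]]]
print(matches["text"])
```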
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4029/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4029/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
41 days, 15:04:19
https://api.github.com/repos/huggingface/datasets/issues/4027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4027/comments
https://api.github.com/repos/huggingface/datasets/issues/4027/events
https://github.com/huggingface/datasets/issues/4027
1,180,991,344
I_kwDODunzps5GZH9w
4,027
ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme'
{ "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MoritzLaurer", "id": 41862082, "login": "MoritzLaurer", "node_id": "MDQ6VXNlcjQxODYyMDgy", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "type": "User", "url": "https://api.github.com/users/MoritzLaurer", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi, @MoritzLaurer, thanks for reporting.\r\n\r\nNormally this is due to a mismatch between the versions of your Elasticsearch client and server:\r\n- your ES client is passing only keyword arguments to your ES server\r\n- whereas your ES server expects a positional argument called 'scheme'\r\n\r\nIn order to fix this, you should align the major versions of both Elasticsearch client and server.\r\n\r\nYou can have more info:\r\n- on this other issue page: https://github.com/huggingface/datasets/issues/3956#issuecomment-1072115173\r\n- Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n\r\nFeel free to re-open this issue if the problem persists.\r\n\r\nDuplicate of:\r\n- #3956", "1. Check elasticsearch version\r\n```\r\nimport elasticsearch\r\nprint(elasticsearch.__version__)\r\n```\r\nEx: 7.9.1\r\n2. Uninstall current elasticsearch package\r\n`pip uninstall elasticsearch`\r\n3. Install elasticsearch 7.9.1 package\r\n`pip install elasticsearch==7.9.1`" ]
2022-03-25T16:22:28
2022-04-07T10:29:52
2022-03-28T07:58:56
NONE
null
null
null
null
## Describe the bug I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch ``` from datasets import load_dataset squad = load_dataset('crime_and_punish', split='train[:1000]') ``` When I run the line: `squad.add_elasticsearch_index("context", host="localhost", port="9200")` I get the error: `TypeError: __init__() missing 1 required positional argument: 'scheme'` ## Expected results No error message ## Actual results ``` TypeError Traceback (most recent call last) [<ipython-input-23-9205593edef3>](https://localhost:8080/#) in <module>() 1 import elasticsearch ----> 2 squad.add_elasticsearch_index("text", host="localhost", port="9200") 6 frames [/usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py](https://localhost:8080/#) in host_mapping_to_node_config(host) 209 options["path_prefix"] = options.pop("url_prefix") 210 --> 211 return NodeConfig(**options) # type: ignore 212 213 TypeError: __init__() missing 1 required positional argument: 'scheme' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.0 - Platform: Linux, Google Colab - Python version: Google Colab (probably 3.7) - PyArrow version: ?
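A sketch of the version-pinning fix suggested in the comments, assuming an Elasticsearch 7.x server running locally; the sample dataset and column are placeholders, and the client is pinned to the server's major version beforehand (e.g. `pip uninstall elasticsearch && pip install "elasticsearch>=7,<8"`).

```python
import elasticsearch
from elasticsearch import Elasticsearch
from datasets import Dataset

print(elasticsearch.__version__)  # the client should share the server's major version

# With a 7.x client, host/port kwargs work and no 'scheme' argument is required.
es = Elasticsearch(hosts=[{"host": "localhost", "port": 9200}])
print(es.info())  # fails fast if client and server still disagree

ds = Dataset.from_dict({"context": ["a small example passage", "another passage"]})
ds.add_elasticsearch_index("context", es_client=es)
```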
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4027/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4027/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 15:36:28
https://api.github.com/repos/huggingface/datasets/issues/4025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4025/comments
https://api.github.com/repos/huggingface/datasets/issues/4025/events
https://github.com/huggingface/datasets/issues/4025
1,180,963,105
I_kwDODunzps5GZBEh
4,025
Missing argument in precision/recall
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Thanks for the suggestion, @Dref360.\r\n\r\nWe are adding that argument. " ]
2022-03-25T15:55:52
2022-03-28T09:53:06
2022-03-28T09:53:06
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** [`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts an argument `zero_division`, but it is not available in [precision Metric](https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py#L117) Same issue is present for Recall. **Describe the solution you'd like** Support for **kwargs or adding a new field for `zero_division`. **Describe alternatives you've considered** I could filter the warnings myself, but that is not ideal. **Additional context** I can make the requested changes if this is approved.
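An illustration of the scikit-learn argument this request is about: when there are no positive predictions, precision is 0/0, and `zero_division` picks the value to return instead of emitting an `UndefinedMetricWarning`. Forwarding this keyword through the `datasets` metric (e.g. `metric.compute(..., zero_division=0)`) is the proposed change, not necessarily the released API at the time of the issue.

```python
from sklearn.metrics import precision_score

references = [1, 0, 1, 0]
predictions = [0, 0, 0, 0]  # no positive predictions -> precision is undefined

print(precision_score(references, predictions))                   # 0.0, plus a warning
print(precision_score(references, predictions, zero_division=0))  # 0.0, no warning
print(precision_score(references, predictions, zero_division=1))  # 1.0 by convention
```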
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4025/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4025/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 17:57:14
https://api.github.com/repos/huggingface/datasets/issues/4015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4015/comments
https://api.github.com/repos/huggingface/datasets/issues/4015/events
https://github.com/huggingface/datasets/issues/4015
1,180,510,856
I_kwDODunzps5GXSqI
4,015
Can not correctly parse the classes with imagefolder
{ "avatar_url": "https://avatars.githubusercontent.com/u/21264909?v=4", "events_url": "https://api.github.com/users/YiSyuanChen/events{/privacy}", "followers_url": "https://api.github.com/users/YiSyuanChen/followers", "following_url": "https://api.github.com/users/YiSyuanChen/following{/other_user}", "gists_url": "https://api.github.com/users/YiSyuanChen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YiSyuanChen", "id": 21264909, "login": "YiSyuanChen", "node_id": "MDQ6VXNlcjIxMjY0OTA5", "organizations_url": "https://api.github.com/users/YiSyuanChen/orgs", "received_events_url": "https://api.github.com/users/YiSyuanChen/received_events", "repos_url": "https://api.github.com/users/YiSyuanChen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YiSyuanChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YiSyuanChen/subscriptions", "type": "User", "url": "https://api.github.com/users/YiSyuanChen", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "I found that the problem arises because the image files in my folder are actually symbolic links (for my own reasons). After modifications, the classes can now be correctly parsed. Therefore, I close this issue.", "HI, I have a question. How much time did you load the ImageNet data files? " ]
2022-03-25T08:51:17
2022-03-28T01:02:03
2022-03-25T09:27:56
NONE
null
null
null
null
## Describe the bug I tried to load my own image dataset with imagefolder, but the parsing of classes is incorrect. ## Steps to reproduce the bug I organized my dataset (ImageNet) in the following structure: ``` - imagenet/ - train/ - n01440764/ - ILSVRC2012_val_00000293.jpg - ...... - n01695060/ - ...... - val/ - n01440764/ - n01695060/ - ...... ``` At first, I followed the instructions from the Huggingface [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification#using-your-own-data) to load my data as: ``` from datasets import load_dataset data_files = {'train': 'imagenet/train', 'val': 'imagenet/val'} ds = load_dataset("nateraw/image-folder", data_files=data_files, task="image-classification") ``` but it resulted in the following error (I masked my personal path as <PERSONAL_PATH>): ``` FileNotFoundError: Unable to find 'https://huggingface.co/datasets/nateraw/image-folder/resolve/main/imagenet/train' at <PERSONAL_PATH>/ImageNet/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main ``` Next, I followed a recent issue #3960 to load the data as: ``` from datasets import load_dataset data_files = {'train': ['imagenet/train/**'], 'val': ['imagenet/val/**']} ds = load_dataset("imagefolder", data_files=data_files, task="image-classification") ``` and the data can be loaded without error as: (I copied the val folder to the train folder for illustration) ``` >>> ds DatasetDict({ train: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) val: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) }) ``` However, the parsed classes are wrong (there should be 1000 classes): ``` >>> ds["train"].features {'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=1, names=['val'], id=None)} ``` ## Expected results I expect the "labels" feature in ds["train"].features to contain 1000 classes. ## Actual results The "labels" feature in ds["train"].features contains only 1 incorrect class. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Ubuntu 18.04 - Python version: Python 3.7.12 - PyArrow version: 7.0.0
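A sketch of the layout check that matters here, under the assumption (confirmed in the comments above) that imagefolder infers labels from the per-class parent directories it can actually see; in this report the image files were symlinks, which broke that inference, so the directories are verified before loading.

```python
from pathlib import Path
from datasets import load_dataset

root = Path("imagenet")
for split in ("train", "val"):
    classes = sorted(p.name for p in (root / split).iterdir() if p.is_dir())
    print(split, len(classes), "class directories")  # expect 1000 for ImageNet

data_files = {"train": ["imagenet/train/**"], "val": ["imagenet/val/**"]}
ds = load_dataset("imagefolder", data_files=data_files, task="image-classification")
print(ds["train"].features["labels"])  # should list the synset names, not ['val']
```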
{ "avatar_url": "https://avatars.githubusercontent.com/u/21264909?v=4", "events_url": "https://api.github.com/users/YiSyuanChen/events{/privacy}", "followers_url": "https://api.github.com/users/YiSyuanChen/followers", "following_url": "https://api.github.com/users/YiSyuanChen/following{/other_user}", "gists_url": "https://api.github.com/users/YiSyuanChen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YiSyuanChen", "id": 21264909, "login": "YiSyuanChen", "node_id": "MDQ6VXNlcjIxMjY0OTA5", "organizations_url": "https://api.github.com/users/YiSyuanChen/orgs", "received_events_url": "https://api.github.com/users/YiSyuanChen/received_events", "repos_url": "https://api.github.com/users/YiSyuanChen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YiSyuanChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YiSyuanChen/subscriptions", "type": "User", "url": "https://api.github.com/users/YiSyuanChen", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4015/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4015/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:36:39
https://api.github.com/repos/huggingface/datasets/issues/4013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4013/comments
https://api.github.com/repos/huggingface/datasets/issues/4013/events
https://github.com/huggingface/datasets/issues/4013
1,180,427,174
I_kwDODunzps5GW-Om
4,013
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
{ "avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4", "events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}", "followers_url": "https://api.github.com/users/hazalturkmen/followers", "following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}", "gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hazalturkmen", "id": 42860397, "login": "hazalturkmen", "node_id": "MDQ6VXNlcjQyODYwMzk3", "organizations_url": "https://api.github.com/users/hazalturkmen/orgs", "received_events_url": "https://api.github.com/users/hazalturkmen/received_events", "repos_url": "https://api.github.com/users/hazalturkmen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions", "type": "User", "url": "https://api.github.com/users/hazalturkmen", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file extensions. However, your data file does not have any extension.\r\n\r\nNote that current supported data file extensions are: 'csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'.\r\n\r\nYou have more info on our docs: [How to share a dataset](https://huggingface.co/docs/datasets/share).", "thanks for reply :)" ]
2022-03-25T07:12:02
2022-04-04T08:05:01
2022-03-25T14:16:11
NONE
null
null
null
null
## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM' **Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM* *I cannot see the dataset preview.* ``` Server Error Status code: 400 Exception: HTTPError Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true ``` Am I the one who added this dataset ? Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4", "events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}", "followers_url": "https://api.github.com/users/hazalturkmen/followers", "following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}", "gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hazalturkmen", "id": 42860397, "login": "hazalturkmen", "node_id": "MDQ6VXNlcjQyODYwMzk3", "organizations_url": "https://api.github.com/users/hazalturkmen/orgs", "received_events_url": "https://api.github.com/users/hazalturkmen/received_events", "repos_url": "https://api.github.com/users/hazalturkmen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions", "type": "User", "url": "https://api.github.com/users/hazalturkmen", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4013/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
7:04:09
https://api.github.com/repos/huggingface/datasets/issues/4009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4009/comments
https://api.github.com/repos/huggingface/datasets/issues/4009/events
https://github.com/huggingface/datasets/issues/4009
1,179,658,611
I_kwDODunzps5GUClz
4,009
AMI load_dataset error: sndfile library not found
{ "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/i-am-neo", "id": 102043285, "login": "i-am-neo", "node_id": "U_kgDOBhUOlQ", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "repos_url": "https://api.github.com/users/i-am-neo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "type": "User", "url": "https://api.github.com/users/i-am-neo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)" ]
2022-03-24T15:13:38
2022-03-24T15:46:38
2022-03-24T15:17:29
NONE
null
null
null
null
## Describe the bug Getting error message when loading AMI dataset. ## Steps to reproduce the bug `python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ` ## Expected results A clear and concise description of the expected results. ## Actual results Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
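A hedged sketch of the usual fix for this error: "sndfile library not found" points at the system libsndfile dependency used to decode audio, not at the AMI data itself. The apt package name assumes a Debian/Ubuntu machine, which matches the platform reported below.

```python
# Prerequisites (shell), assuming Debian/Ubuntu:
#   sudo apt-get install libsndfile1
#   pip install soundfile
from datasets import load_dataset

ami = load_dataset("ami", "headset-single", split="validation")
print(ami[0])
```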
{ "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/i-am-neo", "id": 102043285, "login": "i-am-neo", "node_id": "U_kgDOBhUOlQ", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "repos_url": "https://api.github.com/users/i-am-neo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "type": "User", "url": "https://api.github.com/users/i-am-neo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4009/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:03:51
https://api.github.com/repos/huggingface/datasets/issues/4007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4007/comments
https://api.github.com/repos/huggingface/datasets/issues/4007/events
https://github.com/huggingface/datasets/issues/4007
1,179,381,021
I_kwDODunzps5GS-0d
4,007
set_format does not work with multi dimension tensor
{ "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/phihung", "id": 5902432, "login": "phihung", "node_id": "MDQ6VXNlcjU5MDI0MzI=", "organizations_url": "https://api.github.com/users/phihung/orgs", "received_events_url": "https://api.github.com/users/phihung/received_events", "repos_url": "https://api.github.com/users/phihung/repos", "site_admin": false, "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "type": "User", "url": "https://api.github.com/users/phihung", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :\r\n```python\r\nds = Dataset.from_dict({\"A\": [torch.rand((2, 2))]}, features=Features({\"A\": Array2D(shape=(2, 2), dtype=\"float32\")}))\r\n```\r\n", "Hi @mariosasko I'm facing the same issue and the only work around I've found so far is to convert my `DatasetDict` to a dictionary and then create new objects with `Dataset.from_dict`.\r\n```\r\ndataset = load_dataset(\"my_dataset.py\")\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndict_dataset_test = dataset[\"test\"].to_dict()\r\n...\r\ndataset_test = Dataset.from_dict(dict_dataset_test, features=Features(features))\r\n```\r\nHowever, converting a `Dataset` object to a dict takes quite a lot of time and memory... Is there a way to directly create an `Array2D` without having to transform the original `Dataset` to a dict?", "Hi! Yes, you can directly pass the `Features` dictionary as `features` in `map` to cast the column to `Array2D`:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example), features=Features(features))\r\n```\r\nOr you can use `cast` after `map` to do that:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndataset = dataset.cast(Features(features))\r\n```", "Fantastic thank you @mariosasko\r\nThe first option you suggested is indeed way faster 😃 " ]
2022-03-24T11:27:43
2022-03-30T07:28:57
2022-03-24T14:39:29
NONE
null
null
null
null
## Describe the bug set_format only transforms the last dimension of a multi-dimension list to tensor ## Steps to reproduce the bug ```python import torch from datasets import Dataset ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}) # ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result ds = ds.with_format("torch") print(ds[0]) ``` ## Expected results ``` {'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]} ``` ## Actual results ``` {'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - datasets version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
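A sketch of the Array2D fix shown in the comments of this issue: declaring the column with a fixed 2x2 shape lets the torch formatter return a single (2, 2) tensor instead of a list of row tensors.

```python
import torch
from datasets import Array2D, Dataset, Features

features = Features({"A": Array2D(shape=(2, 2), dtype="float32")})
ds = Dataset.from_dict({"A": [torch.rand((2, 2)).numpy()]}, features=features)
ds = ds.with_format("torch")

print(ds[0]["A"])        # tensor([[...], [...]])
print(ds[0]["A"].shape)  # torch.Size([2, 2])
```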
{ "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/phihung", "id": 5902432, "login": "phihung", "node_id": "MDQ6VXNlcjU5MDI0MzI=", "organizations_url": "https://api.github.com/users/phihung/orgs", "received_events_url": "https://api.github.com/users/phihung/received_events", "repos_url": "https://api.github.com/users/phihung/repos", "site_admin": false, "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "type": "User", "url": "https://api.github.com/users/phihung", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4007/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4007/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3:11:46
https://api.github.com/repos/huggingface/datasets/issues/4005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4005/comments
https://api.github.com/repos/huggingface/datasets/issues/4005/events
https://github.com/huggingface/datasets/issues/4005
1,179,365,663
I_kwDODunzps5GS7Ef
4,005
Yelp not working
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "I don't think it's an issue with the dataset-viewer. Maybe @lhoestq or @albertvillanova could confirm.\r\n\r\n```python\r\n>>> from datasets import load_dataset, DownloadMode\r\n>>> import itertools\r\n>>> # without streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.97MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nDownloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /home/slesage/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.10k/1.10k [00:00<00:00, 1.39MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 676, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0']\r\n\r\n>>> # with streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD, streaming=True)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.53MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 375, in _info\r\n await _file_info(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 736, in _file_info\r\n r.raise_for_status()\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/aiohttp/client_reqrep.py\", line 1000, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://doc-0g-bs-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/gklhpdq1arj8v15qrg7ces34a8c3413d/1648144575000/07511006523564980941/*/0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0?e=download')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1677, in load_dataset\r\n return builder_instance.as_streaming_dataset(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 906, in 
as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/yelp_review_full/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43/yelp_review_full.py\", line 102, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 800, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 778, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/py_utils.py\", line 306, in map_nested\r\n return function(data_struct)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 783, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 372, in _get_extraction_protocol\r\n with fsspec.open(urlpath, **kwargs) as f:\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/core.py\", line 102, in __enter__\r\n f = self.fs.open(self.path, mode=mode)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/spec.py\", line 978, in open\r\n f = self._open(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 335, in _open\r\n size = size or self.info(path, **kwargs)[\"size\"]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 88, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 69, in sync\r\n raise result[0]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 388, in _info\r\n raise FileNotFoundError(url) from exc\r\nFileNotFoundError: https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0&confirm=t\r\n```\r\n\r\nAnd this is before even trying to access the rows with\r\n\r\n```python\r\n>>> rows = list(itertools.islice(dataset, 100))\r\n>>> rows = list(dataset.take(100))\r\n```", "Yet another issue related to google drive not being nice. Most likely your IP has been banned from using their API programmatically. Do you know if we are allowed to host and redistribute the data ourselves ?", "Hi,\r\n\r\nFacing the same issue while loading the dataset: \r\n\r\n`Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files`\r\n\r\nThanks", "> Facing the same issue while loading the dataset:\r\n> \r\n> Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files\r\n\r\nThanks for reporting. I think this is the same issue. Feel free to try again later, once Google Drive stopped blocking you. 
You can retry by passing `download_mode=\"force_redownload\"` to `load_dataset`", "I noticed that FastAI hosts the Yelp dataset at https://s3.amazonaws.com/fast-ai-nlp/yelp_review_full_csv.tgz (from their catalog [here](https://course.fast.ai/datasets))\r\n\r\nLet's update the yelp dataset script to download from there instead of Google Drive", "I updated the link to not use Google Drive anymore, we will do a release early next week with the updated download url of the dataset :)" ]
2022-03-24T11:14:00
2022-03-25T14:59:57
2022-03-25T14:56:10
CONTRIBUTOR
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train Doesn't work: ``` Server error Status code: 400 Exception: Error Message: line contains NULL ``` Am I the one who added this dataset ? No A seamingly copy of the dataset: https://huggingface.co/datasets/SetFit/yelp_review_full works . The original one: https://huggingface.co/datasets/yelp_review_full has > 20K downloads.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4005/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4005/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 3:42:10
https://api.github.com/repos/huggingface/datasets/issues/4003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4003/comments
https://api.github.com/repos/huggingface/datasets/issues/4003/events
https://github.com/huggingface/datasets/issues/4003
1,179,286,877
I_kwDODunzps5GSn1d
4,003
ASSIN2 dataset checksum bug
{ "avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4", "events_url": "https://api.github.com/users/ruanchaves/events{/privacy}", "followers_url": "https://api.github.com/users/ruanchaves/followers", "following_url": "https://api.github.com/users/ruanchaves/following{/other_user}", "gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ruanchaves", "id": 14352388, "login": "ruanchaves", "node_id": "MDQ6VXNlcjE0MzUyMzg4", "organizations_url": "https://api.github.com/users/ruanchaves/orgs", "received_events_url": "https://api.github.com/users/ruanchaves/received_events", "repos_url": "https://api.github.com/users/ruanchaves/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions", "type": "User", "url": "https://api.github.com/users/ruanchaves", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Using latest code, I am still facing the issue.\r\n\r\n```python\r\n(base) vimos@vimosmu ➜ ~ ipython\r\nPython 3.6.7 | packaged by conda-forge | (default, Nov 6 2019, 16:19:42) \r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: load_dataset(\"assin2\")\r\nDownloading builder script: 4.24kB [00:00, 244kB/s]\r\nDownloading metadata: 2.58kB [00:00, 2.19MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset assin2/default (download: 2.02 MiB, generated: 1.21 MiB, post-processed: Unknown size, total: 3.23 MiB) to /home/vimos/.cache/huggingface/datasets/assin2/default/1.0.0/8467f7acbda82f62ab960ca869dc1e96350e0e103a1ef7eaa43bbee530b80061...\r\nDownloading data: 1.51MB [00:00, 102MB/s]\r\nDownloading data: 116kB [00:00, 63.6MB/s]\r\nDownloading data: 493kB [00:00, 95.8MB/s] \r\nDownloading data files: 100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 8.27it/s]\r\n---------------------------------------------------------------------------\r\nExpectedMoreDownloadedFiles Traceback (most recent call last)\r\n<ipython-input-2-b367d1ffd68e> in <module>\r\n----> 1 load_dataset(\"assin2\")\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1694 ignore_verifications=ignore_verifications,\r\n 1695 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1696 use_auth_token=use_auth_token,\r\n 1697 )\r\n 1698\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 604 if not downloaded_from_gcs:\r\n 605 self._download_and_prepare(\r\n--> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 607 )\r\n 608 # Sync info\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1102\r\n 1103 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1105\r\n 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 675 if verify_infos:\r\n 676 verify_checksums(\r\n--> 677 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 678 )\r\n 679\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 31 return\r\n 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:\r\n---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:\r\n 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))\r\n\r\nExpectedMoreDownloadedFiles: {'https://drive.google.com/u/0/uc?id=1kb7xq6Mb3eaqe9cOAo70BaG9ypwkIqEU&export=download', 
'https://drive.google.com/u/0/uc?id=1J3FpQaHxpM-FDfBUyooh-sZF-B-bM_lU&export=download', 'https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'}\r\n```", "That's true. Steps to reproduce the bug on Google Colab:\r\n\r\n```\r\ngit clone https://github.com/huggingface/datasets.git\r\ncd datasets\r\npip install -e .\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nHowever the dataset will load without any problems if you just install version 2.0.0:\r\n\r\n ```\r\npip install datasets\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nAny thoughts @lhoestq ?", "Right indeed ! Let me open a PR to fix this.\r\nThe dataset_infos.json file that stores some metadata about the dataset to download (and is used to verify it was correctly downloaded) hasn't been updated correctly", "Not sure what the status of this is, but personally I am still getting this error, with glue.", "Can you open a new issue if you got an error with glue please ?", "Have posted at #4241" ]
2022-03-24T10:08:50
2022-04-27T14:14:45
2022-03-28T13:56:39
CONTRIBUTOR
null
null
null
null
## Describe the bug Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2). `NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`. Similar to #3952 , #3942 , #3941 , etc. ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) [<ipython-input-13-c664a92ad5e7>](https://localhost:8080/#) in <module>() ----> 1 load_dataset('assin2') 4 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'] ``` ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("assin2") ``` ## Expected results Load the dataset. ## Actual results The dataset won't load. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Google Colab - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4003/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4003/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4 days, 3:47:49
https://api.github.com/repos/huggingface/datasets/issues/4001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4001/comments
https://api.github.com/repos/huggingface/datasets/issues/4001/events
https://github.com/huggingface/datasets/issues/4001
1,179,231,418
I_kwDODunzps5GSaS6
4,001
How to use generate this multitask dataset for SQUAD? I am getting a value error.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1963097?v=4", "events_url": "https://api.github.com/users/gsk1692/events{/privacy}", "followers_url": "https://api.github.com/users/gsk1692/followers", "following_url": "https://api.github.com/users/gsk1692/following{/other_user}", "gists_url": "https://api.github.com/users/gsk1692/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gsk1692", "id": 1963097, "login": "gsk1692", "node_id": "MDQ6VXNlcjE5NjMwOTc=", "organizations_url": "https://api.github.com/users/gsk1692/orgs", "received_events_url": "https://api.github.com/users/gsk1692/received_events", "repos_url": "https://api.github.com/users/gsk1692/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gsk1692/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsk1692/subscriptions", "type": "User", "url": "https://api.github.com/users/gsk1692", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Hi! Replacing `nlp.<obj>` with `datasets.<obj>` in the script should fix the problem. `nlp` has been renamed to `datasets` more than a year ago, so please use `datasets` instead to avoid weird issues.", "Thank You! Was able to solve with the help of this.", "But I request you to please fix the same in the dataset hub explorer as well...", "May I ask how to get this dataset?" ]
2022-03-24T09:21:51
2022-03-26T09:48:21
2022-03-26T03:35:43
NONE
null
null
null
null
## Dataset viewer issue for 'squad_multitask*' **Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask *short description of the issue* I am trying to generate the multitask dataset for squad dataset. However, gives the error in dataset explorer as well as my local machine. I tried the command: dataset = load_dataset("vershasaxena91/squad_multitask", 'highlight_qg_format') Error: Status code: 400 Exception: TypeError Message: argument of type 'Value' is not iterable Kindly advice.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1963097?v=4", "events_url": "https://api.github.com/users/gsk1692/events{/privacy}", "followers_url": "https://api.github.com/users/gsk1692/followers", "following_url": "https://api.github.com/users/gsk1692/following{/other_user}", "gists_url": "https://api.github.com/users/gsk1692/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gsk1692", "id": 1963097, "login": "gsk1692", "node_id": "MDQ6VXNlcjE5NjMwOTc=", "organizations_url": "https://api.github.com/users/gsk1692/orgs", "received_events_url": "https://api.github.com/users/gsk1692/received_events", "repos_url": "https://api.github.com/users/gsk1692/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gsk1692/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsk1692/subscriptions", "type": "User", "url": "https://api.github.com/users/gsk1692", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4001/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4001/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 18:13:52
https://api.github.com/repos/huggingface/datasets/issues/4000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4000/comments
https://api.github.com/repos/huggingface/datasets/issues/4000/events
https://github.com/huggingface/datasets/issues/4000
1,178,844,616
I_kwDODunzps5GQ73I
4,000
load_dataset error: sndfile library not found
{ "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/i-am-neo", "id": 102043285, "login": "i-am-neo", "node_id": "U_kgDOBhUOlQ", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "repos_url": "https://api.github.com/users/i-am-neo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "type": "User", "url": "https://api.github.com/users/i-am-neo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @i-am-neo,\r\n\r\nThe audio support is an extra feature of `datasets` and therefore it must be installed as an additional optional dependency:\r\n```shell\r\npip install datasets[audio]\r\n```\r\nAdditionally, for specific MP3 support (which is not the case for AMI dataset, that contains WAV audio files), there is another third-party dependency on `torchaudio`.\r\n\r\nYou have all the information in our docs: https://huggingface.co/docs/datasets/audio_process#installation", "Thanks @albertvillanova . Unfortunately the error persists after installing ```datasets[audio]```. Can you direct towards a solution?\r\n\r\n```\r\npip3 install datasets[audio]\r\n```\r\n### log\r\nRequirement already satisfied: datasets[audio] in ./.virtualenvs/hubert/lib/python3.7/site-packages (1.18.3)\r\nRequirement already satisfied: numpy>=1.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.21.5)\r\nRequirement already satisfied: xxhash in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.0.0)\r\nRequirement already satisfied: fsspec[http]>=2021.05.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2022.2.0)\r\nRequirement already satisfied: dill in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.3.4)\r\nRequirement already satisfied: pandas in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.3.5)\r\nRequirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.4.0)\r\nRequirement already satisfied: packaging in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (21.3)\r\nRequirement already satisfied: multiprocess in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.70.12.2)\r\nRequirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (7.0.0)\r\nRequirement already satisfied: tqdm>=4.62.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.63.1)\r\nRequirement already satisfied: aiohttp in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.8.1)\r\nRequirement already satisfied: importlib-metadata in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.11.3)\r\nRequirement already satisfied: requests>=2.19.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2.27.1)\r\nRequirement already satisfied: librosa in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.9.1)\r\nRequirement already satisfied: pyyaml in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (6.0)\r\nRequirement already satisfied: typing-extensions>=3.7.4.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (4.1.1)\r\nRequirement already satisfied: filelock in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (3.6.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from packaging->datasets[audio]) (3.0.7)\r\nRequirement already satisfied: idna<4,>=2.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (3.3)\r\nRequirement already satisfied: certifi>=2017.4.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
requests>=2.19.0->datasets[audio]) (2021.10.8)\r\nRequirement already satisfied: charset-normalizer~=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2.0.12)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (1.26.9)\r\nRequirement already satisfied: attrs>=17.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (21.4.0)\r\nRequirement already satisfied: frozenlist>=1.1.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.3.0)\r\nRequirement already satisfied: aiosignal>=1.1.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.2.0)\r\nRequirement already satisfied: yarl<2.0,>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.7.2)\r\nRequirement already satisfied: asynctest==0.13.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (0.13.0)\r\nRequirement already satisfied: multidict<7.0,>=4.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (6.0.2)\r\nRequirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (4.0.2)\r\nRequirement already satisfied: zipp>=0.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from importlib-metadata->datasets[audio]) (3.7.0)\r\nRequirement already satisfied: decorator>=4.0.10 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (5.1.1)\r\nRequirement already satisfied: soundfile>=0.10.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.55.1)\r\nRequirement already satisfied: pooch>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.6.0)\r\nRequirement already satisfied: resampy>=0.2.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.2.2)\r\nRequirement already satisfied: audioread>=2.1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.1.0)\r\nRequirement already satisfied: scipy>=1.2.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.7.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.0.2)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2.8.2)\r\nRequirement already satisfied: pytz>=2017.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2022.1)\r\nRequirement already satisfied: setuptools in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (0.38.0)\r\nRequirement already satisfied: appdirs>=1.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
pooch>=1.0->librosa->datasets[audio]) (1.4.4)\r\nRequirement already satisfied: six>=1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas->datasets[audio]) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa->datasets[audio]) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from soundfile>=0.10.2->librosa->datasets[audio]) (1.15.0)\r\nRequirement already satisfied: pycparser in ./.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa->datasets[audio]) (2.21)\r\n\r\n### reload\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### log\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. 
\r\nOriginal error:\r\nsndfile library not found\r\n\r\n### just to double-check as per your docs\r\n```\r\npip3 install librosa torchaudio\r\n```\r\n\r\n### logs\r\nRequirement already satisfied: librosa in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.9.1)\r\nRequirement already satisfied: torchaudio in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.11.0+cu113)\r\nRequirement already satisfied: audioread>=2.1.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.1.0)\r\nRequirement already satisfied: packaging>=20.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (21.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.0.2)\r\nRequirement already satisfied: scipy>=1.2.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.7.3)\r\nRequirement already satisfied: decorator>=4.0.10 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (5.1.1)\r\nRequirement already satisfied: resampy>=0.2.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.2.2)\r\nRequirement already satisfied: pooch>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.6.0)\r\nRequirement already satisfied: numpy>=1.17.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.21.5)\r\nRequirement already satisfied: soundfile>=0.10.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.55.1)\r\nRequirement already satisfied: torch==1.11.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torchaudio) (1.11.0+cu113)\r\nRequirement already satisfied: typing-extensions in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torch==1.11.0->torchaudio) (4.1.1)\r\nRequirement already satisfied: setuptools in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (0.38.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from packaging>=20.0->librosa) (3.0.7)\r\nRequirement already satisfied: requests>=2.19.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (2.27.1)\r\nRequirement already satisfied: appdirs>=1.3.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (1.4.4)\r\nRequirement already satisfied: six>=1.3 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from resampy>=0.2.2->librosa) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from soundfile>=0.10.2->librosa) (1.15.0)\r\nRequirement already satisfied: pycparser in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa) (2.21)\r\nRequirement 
already satisfied: charset-normalizer~=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2.0.12)\r\nRequirement already satisfied: certifi>=2017.4.17 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2021.10.8)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (1.26.9)\r\nRequirement already satisfied: idna<4,>=2.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (3.3)\r\n\r\n### try loading again\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### same error\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nsndfile library not found\r\n", "Hi @i-am-neo, thanks again for your detailed report.\r\n\r\nOur `datasets` library support for audio relies on a third-party Python library called `librosa`, which is installed when you do:\r\n```shell\r\npip install datasets[audio]\r\n```\r\n\r\nHowever, the `librosa` library has a dependency on `soundfile`; and `soundfile` depends on a non-Python package called `sndfile`. \r\n\r\nOn Linux (which is your case), this must be installed manually using your operating system package manager, for example:\r\n```shell\r\nsudo apt-get install libsndfile1\r\n```\r\n\r\nPlease, let me know if this works and if so, I will update our docs with all this information.", "@albertvillanova thanks, all good. The key is ```libsndfile1``` - it may help others to note that in your docs. I had installed libsndfile previously." ]
2022-03-24T01:52:32
2022-03-25T17:53:33
2022-03-25T17:53:33
NONE
null
null
null
null
## Describe the bug Can't load ami dataset ## Steps to reproduce the bug ``` python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ``` ## Expected results ## Actual results Downloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e... AMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. 100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 36004.88it/s] 100%|█████████████████████████████████████████████████████████| 136/136 [00:01<00:00, 79.10it/s] 100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 25343.23it/s] 100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2874.78it/s] 100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 27950.38it/s] 100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2892.25it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/i-am-neo", "id": 102043285, "login": "i-am-neo", "node_id": "U_kgDOBhUOlQ", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "repos_url": "https://api.github.com/users/i-am-neo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "type": "User", "url": "https://api.github.com/users/i-am-neo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4000/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4000/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 16:01:01
https://api.github.com/repos/huggingface/datasets/issues/3996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3996/comments
https://api.github.com/repos/huggingface/datasets/issues/3996/events
https://github.com/huggingface/datasets/issues/3996
1,178,415,905
I_kwDODunzps5GPTMh
3,996
Audio.encode_example() throws an error when writing example from array
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" } ]
[ "Good catch ! Yes I think passing `format=\"wav\"` is the right thing to do", "Thanks @polinaeterna for reporting this issue.\r\n\r\nIn relation to the decoding of MP3 audio files without torchaudio, I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio. But yes, nice to give an alternative to non-torchaudio users (with a big warning on performance).", "> I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio.\r\n\r\nYeah, I know, but as far as I understand, some users just categorically don't want to have torchaudio in their environment. Anyway, it's just a more or less random example, they can use any library they like following the same logic (I'm just not a big expert in decoding utils so if you can give me some presentation / resources about that I would really appreciate it 🤗)" ]
2022-03-23T17:11:47
2022-03-29T14:16:13
2022-03-29T14:16:13
CONTRIBUTOR
null
null
null
null
## Describe the bug When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error: `TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7f4218c0db30>` ## Steps to reproduce the bug ### Sample code to reproduce the bug ```python # download sample file !wget https://huggingface.co/datasets/polinaeterna/test_encode_example/resolve/main/common_voice_vi_21824030.mp3 arr, sr = librosa.load("common_voice_vi_21824030.mp3") Audio().encode_example({ "path": "common_voice_vi_21824030.mp3", "array": arr, "sampling_rate":sr }) ``` ## Expected results An encoded example (`{"bytes": b'....', "path": 'path'}`) ## Actual results ```python TypeError Traceback (most recent call last) Input In [3], in <module> 1 arr, sr = librosa.load("common_voice_vi_21824030.mp3") ----> 3 Audio().encode_example({ 4 "path": "common_voice_vi_21824030.mp3", 5 "array": arr, 6 "sampling_rate":sr 7 }) File ~/workspace/datasets/src/datasets/features/audio.py:75, in Audio.encode_example(self, value) 73 elif isinstance(value, dict) and "array" in value: 74 buffer = BytesIO() ---> 75 sf.write(buffer, value["array"], value["sampling_rate"]) 76 return {"bytes": buffer.getvalue(), "path": value.get("path")} 77 elif value.get("bytes") is not None or value.get("path") is not None: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:314, in write(file, data, samplerate, subtype, endian, format, closefd) 312 else: 313 channels = data.shape[1] --> 314 with SoundFile(file, 'w', samplerate, channels, 315 subtype, endian, format, closefd) as f: 316 f.write(data) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:627, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 625 mode_int = _check_mode(mode) 626 self._mode = mode --> 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1416, in _create_info_struct(file, mode, samplerate, channels, format, subtype, endian) 1414 original_format = format 1415 if format is None: -> 1416 format = _get_format_from_filename(file, mode) 1417 assert isinstance(format, (_unicode, str)) 1418 else: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1457, in _get_format_from_filename(file, mode) 1455 pass 1456 if format.upper() not in _formats and 'r' not in mode: -> 1457 raise TypeError("No format specified and unable to get format from " 1458 "file extension: {0!r}".format(file)) 1459 return format TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7fd8daf88180> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: datasets master - Platform: Ubuntu 20.04 - Python version: python 3.8.12 - PyArrow version: 6.0.1 ## Solution I guess we just need to add `format` arg in [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L75) like this: ```python sf.write(buffer, value["array"], value["sampling_rate"], format="wav") ``` BTW discovered this when trying to decode audio in mp3 format without torchaudio (would be useful for TensorFlow users), like this: ```python from datasets import load_dataset, Features, Audio ds = load_dataset("common_voice", "vi", split="test") ds = ds.remove_columns("audio") ds.select(range(3)) # 3 samples just for testing def load_mp3_with_librosa(example): arr, sr = librosa.load(example["path"]) example["audio"] = { "path": example["path"], "array": arr, "sampling_rate": sr } return example updated_dataset = ds.map(lambda example: load_mp3_with_librosa(example), features=Features( {"audio": Audio(decode=False)} )) ``` @lhoestq @mariosasko @albertvillanova am I right in my logic? do we agree that we can set wav as the format? 🤗
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3996/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3996/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
5 days, 21:04:26
https://api.github.com/repos/huggingface/datasets/issues/3993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3993/comments
https://api.github.com/repos/huggingface/datasets/issues/3993/events
https://github.com/huggingface/datasets/issues/3993
1,178,201,495
I_kwDODunzps5GOe2X
3,993
Streaming dataset + interleave + DataLoader hangs with multiple workers
{ "avatar_url": "https://avatars.githubusercontent.com/u/614861?v=4", "events_url": "https://api.github.com/users/jpilaul/events{/privacy}", "followers_url": "https://api.github.com/users/jpilaul/followers", "following_url": "https://api.github.com/users/jpilaul/following{/other_user}", "gists_url": "https://api.github.com/users/jpilaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jpilaul", "id": 614861, "login": "jpilaul", "node_id": "MDQ6VXNlcjYxNDg2MQ==", "organizations_url": "https://api.github.com/users/jpilaul/orgs", "received_events_url": "https://api.github.com/users/jpilaul/received_events", "repos_url": "https://api.github.com/users/jpilaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jpilaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jpilaul/subscriptions", "type": "User", "url": "https://api.github.com/users/jpilaul", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "Same thing occurs when streaming files loaded from disk.", "Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?\r\n\r\nCurrently streaming datasets only works in single process, but we're working on having in work in distributed setups as well :) (EDIT: done)", "Hi, thanks for your reply. It seems related :)", "+1", "Please update `datasets` if you're having this issue. What version are you using ?" ]
2022-03-23T14:27:29
2023-02-28T14:14:24
null
NONE
null
null
null
null
## Describe the bug Interleaving multiple iterable datasets that use `load_dataset` on streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers. ## Steps to reproduce the bug ```python from datasets import interleave_datasets, load_dataset from torch.utils.data import DataLoader en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True) it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True) de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True) multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset]) multilingual_dataset = multilingual_dataset.with_format('torch') next(iter(multilingual_dataset)) # works fairly fast dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4) for batch in dataloader: print(len(batch)) # prints nothing after 30 min of waiting dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0) for batch in dataloader: print(len(batch)) # prints right away ``` ## Expected results It should be able to iterate the dataset with multiple workers. ## Actual results Prints with results with `next(iter(multilingual_dataset)) ` and `num_workers=0` but it prints nothing with `num_workers=4` or any number above 0. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - `pytorch` version: 1.10.0+cu113 - Python version: 3.7 - PyArrow version: 6.0.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3993/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3993/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3992/comments
https://api.github.com/repos/huggingface/datasets/issues/3992/events
https://github.com/huggingface/datasets/issues/3992
1,177,946,153
I_kwDODunzps5GNggp
3,992
Image column is not decoded in map when using with with_transform
{ "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/phihung", "id": 5902432, "login": "phihung", "node_id": "MDQ6VXNlcjU5MDI0MzI=", "organizations_url": "https://api.github.com/users/phihung/orgs", "received_events_url": "https://api.github.com/users/phihung/received_events", "repos_url": "https://api.github.com/users/phihung/repos", "site_admin": false, "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "type": "User", "url": "https://api.github.com/users/phihung", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919\r\nBasically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with_transform` assign a non-`None` value to it) and the `input_columns` param is not specified (see https://github.com/huggingface/datasets/issues/3756). We will remove these limitations soon.\r\n\r\n\r\n\r\n" ]
2022-03-23T10:51:13
2022-12-13T16:59:06
2022-12-13T16:59:06
NONE
null
null
null
null
## Describe the bug Image column is not _decoded_ in **map** when using `with_transform` ## Steps to reproduce the bug ```python from datasets import Image, Dataset def add_C(batch): batch["C"] = batch["A"] return batch ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image()) ds = ds.with_transform(lambda x: x) # <= This line causes the problem ds = ds.map(add_C, batched=True) print(ds[0]) ``` ## Expected results ``` {'C': <PIL.PngImagePlugin.PngImageFile>, ...} ``` ## Actual results ``` {'C': {'bytes': None, 'path': 'image.png'}, ...} ``` If we remove the `with_transform` line, we get the expected result. ## Environment info - `datasets` version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
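Given the explanation in the comment above (decoding is skipped once a format or transform is attached), one possible workaround, sketched here under the assumption that the mapped function does not depend on the transform, is to run `map` before attaching the transform:

```python
from datasets import Dataset, Image

def add_C(batch):
    batch["C"] = batch["A"]
    return batch

# Same hypothetical "image.png" file as in the reproduction above
ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image())
ds = ds.map(add_C, batched=True)     # images are decoded here, before any format is set
ds = ds.with_transform(lambda x: x)  # attach the transform afterwards
print(ds[0])
```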
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3992/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3992/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
265 days, 6:07:53
https://api.github.com/repos/huggingface/datasets/issues/3991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3991/comments
https://api.github.com/repos/huggingface/datasets/issues/3991/events
https://github.com/huggingface/datasets/issues/3991
1,177,362,901
I_kwDODunzps5GLSHV
3,991
Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
[]
2022-03-22T22:16:05
2022-03-23T12:57:16
null
NONE
null
null
null
null
## Adding a Dataset - **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)* - **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.* - **Data:** *[link to the Github repository or current dataset location](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)* - **Motivation:** *Key dataset in the healthcare community* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). FYI @osanseviero @abidlabs
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3991/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3991/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3990/comments
https://api.github.com/repos/huggingface/datasets/issues/3990/events
https://github.com/huggingface/datasets/issues/3990
1,176,976,247
I_kwDODunzps5GJzt3
3,990
Improve AutomaticSpeechRecognition task template
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "There is an open PR to do that: #3364. I just haven't had time to finish it... ", "> There is an open PR to do that: #3364. I just haven't had time to finish it...\r\n\r\n😬 thanks..." ]
2022-03-22T15:41:08
2022-03-23T17:12:40
2022-03-23T17:12:40
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** The [AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated, as it uses a path to the audio file as the audio column instead of the Audio feature itself (I guess that's because the Audio feature didn't exist at the time this template was created). **Describe the solution you'd like** Change the audio column from a string path to an Audio feature.
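For illustration only, the difference between the two representations can be shown with `cast_column`; the column names and file name below are placeholders, and this is just a sketch of the proposed direction rather than the actual template change:

```python
from datasets import Audio, Dataset

# Current template assumption: the audio column is just a string path
ds = Dataset.from_dict({"file": ["sample.wav"], "text": ["hello"]})

# Proposed direction: make it an Audio feature, so examples decode to
# {"path": ..., "array": ..., "sampling_rate": ...}
ds = ds.cast_column("file", Audio(sampling_rate=16_000))
```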
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3990/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 1:31:32
https://api.github.com/repos/huggingface/datasets/issues/3986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3986/comments
https://api.github.com/repos/huggingface/datasets/issues/3986/events
https://github.com/huggingface/datasets/issues/3986
1,176,429,565
I_kwDODunzps5GHuP9
3,986
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
{ "avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4", "events_url": "https://api.github.com/users/kelvinAI/events{/privacy}", "followers_url": "https://api.github.com/users/kelvinAI/followers", "following_url": "https://api.github.com/users/kelvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kelvinAI", "id": 10686779, "login": "kelvinAI", "node_id": "MDQ6VXNlcjEwNjg2Nzc5", "organizations_url": "https://api.github.com/users/kelvinAI/orgs", "received_events_url": "https://api.github.com/users/kelvinAI/received_events", "repos_url": "https://api.github.com/users/kelvinAI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions", "type": "User", "url": "https://api.github.com/users/kelvinAI", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?", "Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datasets/issues/329 . In this case the user was able to modify and add -o flock option while mounting and it solved the problem. \r\nHowever in other cases such as mine, we do not have the permissions to modify the commands while mounting. I'm still trying to figure out a workaround. Any ideas how can we use a mounted Lustre filesystem with no flock option?\r\n", "Hi @kelvinAI , I've had this issue on our institution's system which uses Lustre (in addition to our compute nodes being siloed off from external network access). The workaround I made for downloading/loading datasets was to set the `$HFHOME` environment variable to a location on the node's local storage (SSD), effectively a location that gets cleared regularly and sometimes gets used for temporary or cached files which is pretty common, e.g. \"scratch\" storage. Maybe your sysadmins, if you have them, could point you to subdirectories on a node that aren't linked to the Lustre filesystem. After downloading to scratch I found that the transformers, modules, and metrics cached folders were fine to move to my user drives on the Lustre filesystem but cached datasets that had fingerprints still had some issues with filelock, so it would help to use the function `my_dataset.save_to_disk('path/on/lustre_fs')` and static class function `Dataset.load_from_disk('path/on/lustre_fs')`. In rough steps:\r\n\r\n1. Initially download to scratch storage with `ds = datasets.load_dataset(dataset_name)`\r\n2. Call `ds.save_to_disk(my_path_on_lustre)` with a path in your user space on the Lustre filesystem\r\n3. Load datasets with `from datasets import Dataset; new_ds = Dataset.load_from_disk(my_path_on_lustre)`\r\n\r\nObviously this hinges on there existing scratch storage on the nodes you're using. Fingers crossed.", "Hi @jpmcd , thanks for sharing your experience. For my case, the Lustre filesystem (with more storage space) is the scratch storage like the one you've mentioned. We have a local storage for each user but unfortunately there's not enough space in it to 'cache' huge datasets, hence that is why I tried changing HF_HOME to point to the scratch disk with more space and encountered the flock issue. Unfortunately I'm not aware of any viable solution to this for now so I simply fall back to using torch dataset. ", "@jpmcd your comment saved me from pulling my hair out in frustration. Setting `HF_HOME` to a directory that's not on Lustre works like a charm. ✨ " ]
2022-03-22T08:23:21
2023-03-06T16:55:04
null
NONE
null
null
null
null
## Describe the bug Dataset loads indefinitely after modifying cache path (~/.cache/huggingface) If none of the environment variables are set, this custom dataset loads fine (a JSON-based dataset with a custom dataset loading script) ** Update: Transformers modules face the same issue during loading ## A clear and concise description of what the bug is. Issue: - Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory - No error code, had to terminate the process - There are some files created in the cache directory: ``` custom_cache_dir | -- modules | -- __init__.py | -- datasets_modules | -- __init__.py | -- datasets | -- __init__.py | -- script.py (Dataset loading script) | -- script.lock ``` There's no error nor any logs thrown, so I'm out of ideas of how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk. ## Steps to reproduce the bug What I've tried: - Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703) - Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html) - Modifying cache_dir param during runtime ```python >>> from datasets import load_dataset >>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache') ``` - Disabling dataset cache ```python >>> from datasets import set_caching_enabled >>> set_caching_enabled(False) ``` ## Expected results Datasets should load / cache as usual, with the only exception that the cache directory is different ## Actual results Any action taken above to change the cache directory results in loading indefinitely without terminating. ## Environment info - `transformers` version: 4.18.0.dev0 - Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3986/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3985/comments
https://api.github.com/repos/huggingface/datasets/issues/3985/events
https://github.com/huggingface/datasets/issues/3985
1,175,982,937
I_kwDODunzps5GGBNZ
3,985
[image feature] Too many files open error when image feature is returned as a path
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apsdehal", "id": 3616806, "login": "apsdehal", "node_id": "MDQ6VXNlcjM2MTY4MDY=", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "repos_url": "https://api.github.com/users/apsdehal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "type": "User", "url": "https://api.github.com/users/apsdehal", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[]
2022-03-21T21:54:05
2022-03-23T18:19:27
2022-03-23T18:19:27
CONTRIBUTOR
null
null
null
null
## Describe the bug PR in context: #3967. If I load the dataset in this PR (TextVQA) and do a simple list comprehension on the dataset, I get a `Too many open files` error. This is happening due to the way we are loading the image feature when a str path is returned from `_generate_examples`. Specifically at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we open the file handle to the image but never close it. This, in my understanding, is causing the issue. ## Steps to reproduce the bug Pull the PR locally and run the following code ```python from datasets import load_dataset dataset = load_dataset("./datasets/textvqa")["train"] data = [item for item in dataset] # Error happens ``` ## Expected results List comprehension should work smoothly ## Actual results `Too many open files` error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.10.0 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
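As a general illustration of the fix the report points at (not the exact patch that was merged for it), reading the bytes inside a context manager closes the file descriptor promptly instead of leaving it open until garbage collection:

```python
def encode_image(path):
    # Leaky pattern: calling open(path, "rb") and keeping the handle around
    # accumulates one descriptor per image until the OS limit is hit.
    # Safer pattern: the context manager closes the handle as soon as the bytes are read.
    with open(path, "rb") as f:
        return {"path": path, "bytes": f.read()}
```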
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3985/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3985/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 20:25:22
https://api.github.com/repos/huggingface/datasets/issues/3984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3984/comments
https://api.github.com/repos/huggingface/datasets/issues/3984/events
https://github.com/huggingface/datasets/issues/3984
1,175,822,117
I_kwDODunzps5GFZ8l
3,984
Local and automatic tests fail
{ "avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4", "events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}", "followers_url": "https://api.github.com/users/MarkusSagen/followers", "following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}", "gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MarkusSagen", "id": 20767068, "login": "MarkusSagen", "node_id": "MDQ6VXNlcjIwNzY3MDY4", "organizations_url": "https://api.github.com/users/MarkusSagen/orgs", "received_events_url": "https://api.github.com/users/MarkusSagen/received_events", "repos_url": "https://api.github.com/users/MarkusSagen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions", "type": "User", "url": "https://api.github.com/users/MarkusSagen", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! To be able to run the tests, you need to install all the test dependencies and additional ones with\r\n```\r\npip install -e .[tests]\r\npip install -r additional-tests-requirements.txt --no-deps\r\n```\r\n\r\nIn particular, you probably need to `sacrebleu`. It looks like it wasn't able to instantiate `sacrebleu.TER` properly." ]
2022-03-21T19:07:37
2023-07-25T15:18:40
2023-07-25T15:18:40
NONE
null
null
null
null
## Describe the bug Running the tests from CircleCI on a PR or locally fails, even with no changes. Tests seem to fail on `test_metric_common.py` ## Steps to reproduce the bug ```shell git clone https://huggingface/datasets.git cd datasets ``` ```python python -m pip install -e . pytest ``` ## Expected results All tests passing ## Actual results ``` tests/test_metric_common.py:91: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.pyenv/versions/3.8.5/lib/python3.8/doctest.py:1336: in __run exec(compile(example.source, filename, "single", <doctest datasets_modules.metrics.ter.c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155.ter.Ter[3]>:1: in <module> ??? ../datasets/src/datasets/metric.py:430: in compute output = self._compute(**inputs, **compute_kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Metric(name: "ter", features: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Val...ences=references) >>> print(results) {'score': 0.0, 'num_edits': 0, 'ref_length': 6.5} """, stored examples: 0) predictions = ['hello there general kenobi', 'foo bar foobar'] references = [['hello there general kenobi', 'hello there !'], ['foo bar foobar', 'foo bar foobar']] normalized = False, no_punct = False, asian_support = False, case_sensitive = False def _compute( self, predictions, references, normalized: bool = False, no_punct: bool = False, asian_support: bool = False, case_sensitive: bool = False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > sb_ter = TER(normalized, no_punct, asian_support, case_sensitive) E TypeError: __init__() takes 2 positional arguments but 5 were given /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/metrics/ter/c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155/ter.py:130: TypeError ------------------------------ Captured stdout call ------------------------------- Trying: predictions = ["hello there general kenobi", "foo bar foobar"] Expecting nothing ok Trying: references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]] Expecting nothing ok Trying: ter = datasets.load_metric("ter") Expecting nothing ok Trying: results = ter.compute(predictions=predictions, references=references) Expecting nothing ================================ warnings summary ================================= ../.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15 /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses from imp import load_source ../datasets/src/datasets/commands/test.py:35 /home/markussagen/datasets/src/datasets/commands/test.py:35: PytestCollectionWarning: cannot collect test class 'TestCommand' because it has a __init__ constructor (from: tests/commands/test_test.py) class TestCommand(BaseDatasetsCLICommand): tests/commands/test_test.py:33 /home/markussagen/mydataset/tests/commands/test_test.py:33: PytestCollectionWarning: cannot collect test class 'TestCommandArgs' because it has a __new__ constructor (from: 
tests/commands/test_test.py) class TestCommandArgs: tests/test_arrow_dataset.py: 760 warnings tests/test_formatting.py: 60 warnings tests/test_search.py: 31 warnings tests/features/test_array_xd.py: 117 warnings /home/markussagen/datasets/src/datasets/formatting/formatting.py:197: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) tests/test_arrow_dataset.py: 154 warnings tests/features/test_array_xd.py: 1 warning /home/markussagen/datasets/src/datasets/formatting/formatting.py:201: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object}) tests/test_arrow_dataset.py: 60 warnings /home/markussagen/datasets/src/datasets/arrow_dataset.py:3105: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations elif np.issubdtype(values.dtype, np.str): tests/test_arrow_dataset.py: 138 warnings tests/test_formatting.py: 21 warnings /home/markussagen/datasets/src/datasets/formatting/tf_formatter.py:69: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations data_struct.dtype == np.object tests/test_arrow_dataset.py: 240 warnings tests/test_formatting.py: 20 warnings /home/markussagen/datasets/src/datasets/formatting/torch_formatter.py:49: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects tests/test_arrow_dataset.py: 12 warnings tests/test_search.py: 2 warnings tests/features/test_array_xd.py: 6 warnings tests/features/test_image.py: 4 warnings /home/markussagen/datasets/src/datasets/features/features.py:1129: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations [0] + [len(arr) for arr in l_arr], dtype=np.object tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_banking77 /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/banking77/aec0289529599d4572d76ab00c8944cb84f88410ad0c9e7da26189d31f62a55b/banking77.py:24: DeprecationWarning: invalid escape sequence \~ _CITATION = """\ tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_universal_dependencies /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/universal_dependencies/065e728dfe9a8371434a6e87132c2386a6eacab1a076d3a12aa417b994e6ef7d/universal_dependencies.py:6: DeprecationWarning: invalid escape sequence \= _CITATION = """\ tests/test_filesystem.py: 105 warnings /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/responses/__init__.py:398: DeprecationWarning: stream argument is deprecated. Use stream parameter in request directly warn( tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs /home/markussagen/datasets/src/datasets/formatting/jax_formatter.py:57: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations if data_struct.dtype == np.object: # jax arrays cannot be instantied from an array of objects tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:3567: UserWarning: Explicitly requested dtype <class 'jax._src.numpy.lax_numpy.int64'> requested in array is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more. lax._check_user_dtype_supported(dtype, "array") tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html if obj.zone == 'local': tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features _audio /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: `np.complex` is a deprecated alias for the builtin `complex`. To silence this warning, use `complex` by itself. 
Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations dtype=np.complex, tests/features/test_array_xd.py::test_array_xd_with_none /home/markussagen/mydataset/tests/features/test_array_xd.py:338: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations assert isinstance(arr, np.ndarray) and arr.dtype == np.object and arr.shape == (3,) -- Docs: https://docs.pytest.org/en/stable/warnings.html ============================= short test summary info ============================= FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bleurt - I... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_chrf - Att... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_code_eval FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_comet - Im... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_coval - Im... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_ter - Type... ``` ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Linux-5.16.11-76051611-generic-x86_64-with-glibc2.33 - Python version: 3.8.5 - PyArrow version: 5.0.0
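The `TypeError` in the traceback ("__init__() takes 2 positional arguments but 5 were given") is consistent with an older `sacrebleu` 1.x being installed, where `TER` was built from a single args namespace, while the `ter` metric script calls it the way sacrebleu 2.x expects. A hedged sketch of the check and the usual fix, assuming no other dependency pins conflict:

```python
# Check which sacrebleu API is installed
import sacrebleu

print(sacrebleu.__version__)

# If it reports 1.x, upgrading usually resolves the TER instantiation error:
#   pip install -U "sacrebleu>=2.0.0"
```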
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3984/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3984/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
490 days, 20:11:03
https://api.github.com/repos/huggingface/datasets/issues/3983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3983/comments
https://api.github.com/repos/huggingface/datasets/issues/3983/events
https://github.com/huggingface/datasets/issues/3983
1,175,759,412
I_kwDODunzps5GFKo0
3,983
Infinitely attempting lock
{ "avatar_url": "https://avatars.githubusercontent.com/u/11869652?v=4", "events_url": "https://api.github.com/users/jyrr/events{/privacy}", "followers_url": "https://api.github.com/users/jyrr/followers", "following_url": "https://api.github.com/users/jyrr/following{/other_user}", "gists_url": "https://api.github.com/users/jyrr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jyrr", "id": 11869652, "login": "jyrr", "node_id": "MDQ6VXNlcjExODY5NjUy", "organizations_url": "https://api.github.com/users/jyrr/orgs", "received_events_url": "https://api.github.com/users/jyrr/received_events", "repos_url": "https://api.github.com/users/jyrr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jyrr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jyrr/subscriptions", "type": "User", "url": "https://api.github.com/users/jyrr", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Hi ! Thanks for reporting. We're using filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest version of filelock` and see if it works ?\r\n\r\n```\r\npip install filelock\r\n```\r\nhere is a code example from the `filelock` documentation that you can try:\r\n\r\n```python\r\nfrom filelock import FileLock\r\n\r\nlock = FileLock(\"test.txt.lock\")\r\nwith lock:\r\n with open(\"test.txt\", \"a\") as f:\r\n f.write(\"foo\")\r\n```", "I ran into this problem on my school server as well? Any update on how we can solve it? Thanks! ", "Have you tried running the code above to check if FileLock works in your setup ? You may also be interested in checking the https://github.com/tox-dev/filelock repository for issues", "Can you try using a different cache directory ? Maybe there are permissions issues with the default one.\r\n\r\nYou can do so by passing `cache_dir=...` to load_dataset()" ]
2022-03-21T18:11:57
2024-05-09T08:24:34
2022-05-06T16:12:18
NONE
null
null
null
null
I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`. Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS). ``` %sh python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /dbfs/transformers/tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --log_level debug \ --cache_dir /dbfs/transformers/cache ``` All goes well until acquiring a lock -- ``` 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... ``` and so on. I imagine this has to do with DBFS -- is there a way to tackle this?
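One pragmatic sketch, assuming the hang comes from lock semantics on the `/dbfs` FUSE mount: point the datasets cache at node-local disk instead. The `/local_disk0` path is an assumption about the Databricks runtime, not something confirmed in this issue.

```python
from datasets import load_dataset

raw_datasets = load_dataset(
    "cnn_dailymail",
    "3.0.0",
    cache_dir="/local_disk0/hf_cache",  # node-local disk rather than the DBFS FUSE mount
)
```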
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3983/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
45 days, 22:00:21
https://api.github.com/repos/huggingface/datasets/issues/3978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3978/comments
https://api.github.com/repos/huggingface/datasets/issues/3978/events
https://github.com/huggingface/datasets/issues/3978
1,175,226,456
I_kwDODunzps5GDIhY
3,978
I can't view HFcallback dataset for ASR Space
{ "avatar_url": "https://avatars.githubusercontent.com/u/36753484?v=4", "events_url": "https://api.github.com/users/kingabzpro/events{/privacy}", "followers_url": "https://api.github.com/users/kingabzpro/followers", "following_url": "https://api.github.com/users/kingabzpro/following{/other_user}", "gists_url": "https://api.github.com/users/kingabzpro/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kingabzpro", "id": 36753484, "login": "kingabzpro", "node_id": "MDQ6VXNlcjM2NzUzNDg0", "organizations_url": "https://api.github.com/users/kingabzpro/orgs", "received_events_url": "https://api.github.com/users/kingabzpro/received_events", "repos_url": "https://api.github.com/users/kingabzpro/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kingabzpro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kingabzpro/subscriptions", "type": "User", "url": "https://api.github.com/users/kingabzpro", "user_view_type": "public" }
[]
open
false
null
[]
[ "the dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?\r\n\r\nmaybe @lhoestq or @albertvillanova could help\r\n\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-03-24 à 17 36 20\" src=\"https://user-images.githubusercontent.com/1676121/159966006-57dcf8f7-b65f-4200-ac8c-66859318a8bb.png\">\r\n", "The structure of the dataset is not supported. Only the CSV file is parsed and the audio files are ignored.\r\n\r\nWe're working on supporting audio datasets with a specific structure in #3963 ", "Got it.", "Current error:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: LibsndfileError\r\nMessage: Error opening <File-like object HfFileSystem, datasets/kingabzpro/Urdu-ASR-flags@6a8878cfe3a41343fa86ec8b4254209fe56a0f0d/Please Record Your Voice/0.wav>: Format not recognised.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/utils.py\", line 263, in get_rows_or_raise\r\n return get_rows(\r\n File \"/src/services/worker/src/worker/utils.py\", line 204, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/utils.py\", line 241, in get_rows\r\n rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 1357, in __iter__\r\n example = _apply_feature_types_on_example(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 1051, in _apply_feature_types_on_example\r\n decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py\", line 1902, in decode_example\r\n return {\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py\", line 1903, in <dictcomp>\r\n column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py\", line 1325, in decode_nested_example\r\n return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/audio.py\", line 187, in decode_example\r\n array, sampling_rate = sf.read(f)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py\", line 285, in read\r\n with SoundFile(file, 'r', samplerate, channels,\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py\", line 658, in __init__\r\n self._file = self._open(file, mode_int, closefd)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py\", line 1216, in _open\r\n raise LibsndfileError(err, prefix=\"Error opening {0!r}: \".format(self.name))\r\n soundfile.LibsndfileError: Error opening <File-like object HfFileSystem, datasets/kingabzpro/Urdu-ASR-flags@6a8878cfe3a41343fa86ec8b4254209fe56a0f0d/Please Record Your Voice/0.wav>: Format not recognised.\r\n```\r\n\r\nMaybe switch to a discussion here? https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags/discussions. cc @albertvillanova " ]
2022-03-21T11:07:49
2023-09-25T12:19:53
null
NONE
null
null
null
null
## Dataset viewer issue for '*Urdu-ASR-flags*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)* *I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.* Am I the one who added this dataset? Yes
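For reference, the structured-audio support mentioned in the comments (#3963) later landed as the `audiofolder` loader; a rough sketch of the layout and load call it expects is below (the directory names and columns are illustrative, not taken from the actual repo):

```python
# Expected layout (roughly):
#   Urdu-ASR-flags/
#   |-- metadata.csv          # needs a "file_name" column plus any label/text columns
#   `-- data/
#       |-- 0.wav
#       `-- 1.wav
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="Urdu-ASR-flags")
```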
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3978/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3978/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null