Dataset columns (name: type, observed range or class count):

url: stringlengths, 58–61
repository_url: stringclasses, 1 value
labels_url: stringlengths, 72–75
comments_url: stringlengths, 67–70
events_url: stringlengths, 65–68
html_url: stringlengths, 48–51
id: int64, 600M–3.67B
node_id: stringlengths, 18–24
number: int64, 2–7.88k
title: stringlengths, 1–290
user: dict
labels: listlengths, 0–4
state: stringclasses, 2 values
locked: bool, 1 class
assignee: dict
assignees: listlengths, 0–4
comments: listlengths, 0–30
created_at: timestamp[s]date, 2020-04-14 18:18:51 – 2025-11-26 16:16:56
updated_at: timestamp[s]date, 2020-04-29 09:23:05 – 2025-11-30 03:52:07
closed_at: timestamp[s]date, 2020-04-29 09:23:05 – 2025-11-21 12:31:19
author_association: stringclasses, 4 values
type: null
active_lock_reason: null
draft: null
pull_request: null
body: stringlengths, 0–228k
closed_by: dict
reactions: dict
timeline_url: stringlengths, 67–70
performed_via_github_app: null
state_reason: stringclasses, 4 values
sub_issues_summary: dict
issue_dependencies_summary: dict
is_pull_request: bool, 1 class
closed_at_time_taken: duration[s]
https://api.github.com/repos/huggingface/datasets/issues/3706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3706/comments
https://api.github.com/repos/huggingface/datasets/issues/3706/events
https://github.com/huggingface/datasets/issues/3706
1,132,218,874
I_kwDODunzps5DfEn6
3,706
Unable to load dataset 'big_patent'
{ "avatar_url": "https://avatars.githubusercontent.com/u/26432753?v=4", "events_url": "https://api.github.com/users/ankitk2109/events{/privacy}", "followers_url": "https://api.github.com/users/ankitk2109/followers", "following_url": "https://api.github.com/users/ankitk2109/following{/other_user}", "gists_url": "https://api.github.com/users/ankitk2109/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ankitk2109", "id": 26432753, "login": "ankitk2109", "node_id": "MDQ6VXNlcjI2NDMyNzUz", "organizations_url": "https://api.github.com/users/ankitk2109/orgs", "received_events_url": "https://api.github.com/users/ankitk2109/received_events", "repos_url": "https://api.github.com/users/ankitk2109/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ankitk2109/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankitk2109/subscriptions", "type": "User", "url": "https://api.github.com/users/ankitk2109", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @ankitk2109,\r\n\r\nHave you tried passing the split name with the keyword `split=`? See e.g. an example in our Quick Start docs: https://huggingface.co/docs/datasets/quickstart.html#load-the-dataset-and-model\r\n```python\r\n ds = load_dataset(\"big_patent\", \"d\", split=\"validation\")", "Hi @albertvillanova,\r\n\r\nThanks for your response.\r\n\r\nYes, I tried the `split='validation'` as well. But getting the same issue. ", "I'm sorry, but I can't reproduce your problem:\r\n```python\r\nIn [5]: ds = load_dataset(\"big_patent\", \"d\", split=\"validation\")\r\nDownloading and preparing dataset big_patent/d (download: 6.01 GiB, generated: 169.61 MiB, post-processed: Unknown size, total: 6.17 GiB) to .../.cache/big_patent/d/1.0.0/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6.45G/6.45G [27:36<00:00, 3.89MB/s]\r\nExtracting data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [03:18<00:00, 66.08s/it]\r\nDataset big_patent downloaded and prepared to .../.cache/big_patent/d/1.0.0/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c. Subsequent calls will reuse this data. \r\n\r\nIn [6]: ds\r\nOut[6]: \r\nDataset({\r\n features: ['description', 'abstract'],\r\n num_rows: 565\r\n})\r\n", "Maybe you had a connection issue while downloading the file and this was corrupted?\r\nOur cache system uses the file you downloaded first time.\r\nIf so, you could try forcing redownload of the file with:\r\n```python\r\nds = load_dataset(\"big_patent\", \"d\", split=\"validation\", download_mode=\"force_redownload\")", "I am able to download the dataset with ``` download_mode=\"force_redownload\"```. As you mentioned it was an issue with the cached version which was failed earlier due to a network issue. I am closing the issue now, once again thank you." ]
2022-02-11T09:48:34
2022-02-14T15:26:03
2022-02-14T15:26:03
NONE
null
null
null
null
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\datasets\downloads\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\bigPatentData\train.tar.gz doesn't exist ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:1.18.3 - Platform: Windows - Python version:3.8 - PyArrow version:7.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/26432753?v=4", "events_url": "https://api.github.com/users/ankitk2109/events{/privacy}", "followers_url": "https://api.github.com/users/ankitk2109/followers", "following_url": "https://api.github.com/users/ankitk2109/following{/other_user}", "gists_url": "https://api.github.com/users/ankitk2109/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ankitk2109", "id": 26432753, "login": "ankitk2109", "node_id": "MDQ6VXNlcjI2NDMyNzUz", "organizations_url": "https://api.github.com/users/ankitk2109/orgs", "received_events_url": "https://api.github.com/users/ankitk2109/received_events", "repos_url": "https://api.github.com/users/ankitk2109/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ankitk2109/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankitk2109/subscriptions", "type": "User", "url": "https://api.github.com/users/ankitk2109", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3706/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3706/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3 days, 5:37:29
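A runnable sketch of the resolution reached in issue 3706 above: pass the subset and split explicitly, and force a fresh download when a cached archive was corrupted by an interrupted transfer (the string value for `download_mode` is the one quoted in the thread).

```python
from datasets import load_dataset

# Load only the validation split of the "d" subset of big_patent.
ds = load_dataset("big_patent", "d", split="validation")

# If an earlier download was interrupted, the cached archive may be corrupt;
# bypassing the cache and re-downloading is what resolved the report.
ds = load_dataset(
    "big_patent",
    "d",
    split="validation",
    download_mode="force_redownload",
)
print(ds)
```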
https://api.github.com/repos/huggingface/datasets/issues/3704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3704/comments
https://api.github.com/repos/huggingface/datasets/issues/3704/events
https://github.com/huggingface/datasets/issues/3704
1,132,042,631
I_kwDODunzps5DeZmH
3,704
OSCAR-2109 datasets are misaligned and truncated
{ "avatar_url": "https://avatars.githubusercontent.com/u/5794899?v=4", "events_url": "https://api.github.com/users/adrianeboyd/events{/privacy}", "followers_url": "https://api.github.com/users/adrianeboyd/followers", "following_url": "https://api.github.com/users/adrianeboyd/following{/other_user}", "gists_url": "https://api.github.com/users/adrianeboyd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adrianeboyd", "id": 5794899, "login": "adrianeboyd", "node_id": "MDQ6VXNlcjU3OTQ4OTk=", "organizations_url": "https://api.github.com/users/adrianeboyd/orgs", "received_events_url": "https://api.github.com/users/adrianeboyd/received_events", "repos_url": "https://api.github.com/users/adrianeboyd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adrianeboyd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adrianeboyd/subscriptions", "type": "User", "url": "https://api.github.com/users/adrianeboyd", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @adrianeboyd, thanks for reporting.\r\n\r\nThere is indeed a bug in that community dataset:\r\nLine:\r\n```python\r\nmetadata_and_text_files = list(zip(metadata_files, text_files))\r\n``` \r\nshould be replaced with\r\n```python\r\nmetadata_and_text_files = list(zip(sorted(metadata_files), sorted(text_files)))\r\n```\r\n\r\nI am going to contact their owners (https://huggingface.co/oscar-corpus) in order to inform them about the bug.\r\n\r\nI keep you informed.", "That fix is part of it, but it's clearly not the only issue.\r\n\r\nI also already contacted the OSCAR creators, but I reported it here because it looked like huggingface members were the main authors in the git history. Is there a better place to have reported this?", "Hello,\r\n\r\nWe've had an issue that could be linked to this one here: https://github.com/oscar-corpus/corpus/issues/15.\r\n\r\nI have been spot checking the source (`.txt`/`.jsonl`) files for a while, and have not found issues, especially in the start/end of corpora (but I conceed that more integration testing would be necessary on our side).\r\n\r\nThe text and metadata files are designed to be used in sync (with `lang_part_n.txt` and `lang_meta_part_n.jsonl` working together), while staying independent from part to part, so that anyone could randomly choose a part and work with it.\r\n\r\nThe fix @albertvillanova proposed should fix the problem, as the parts will be in sync again.\r\n\r\nLet me know if you need help or more details, I'd be glad to help!", "I'm happy to move the discussion to the other repo!\r\n\r\nMerely sorting the files only **maybe** fixes the processing of the first part. If the first part contains non-unix newlines, it will still be misaligned/truncated, and all the following parts will be truncated with incorrect text offsets and metadata due the offset and newline bugs.", "Fixed:\r\n- https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/commit/3cd7e95aa1799b73c5ea8afc3989635f3e19b86b", "Hi @Uinelj, This is a total noobs question but how can I integrate that bugfix into my code? I reinstalled the datasets library this time from source. Should that have fixed the issue? I am still facing the misalignment issue. Do I need to download the dataset from scratch?", "Hi, I re-downloaded the dataset and still have the problem. 
See: https://github.com/oscar-corpus/corpus/issues/18", "Sorry @norakassner for the late reply.\r\n\r\nThere are indeed several issues creating the misalignment, as @adrianeboyd cleverly pointed out:\r\n- https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/commit/3cd7e95aa1799b73c5ea8afc3989635f3e19b86b fixed one of them\r\n- but there are still others to be fixed", "Normally, the issues should be fixed now:\r\n- Fix offset initialization for each file: https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/commit/1ad9b7bfe00798a9258a923b887bb1c8d732b833\r\n- Disable default universal newline support: https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/commit/0c2f307d3167f03632f502af361ac6c3c393f510\r\n\r\nFeel free to reopen if you find additional misalignments/truncations.\r\n\r\nCC: @adrianeboyd @norakassner @Uinelj ", "Thanks for the updates!\r\n\r\nThe purist in me would still like to have the rstrip not strip additional characters from the original text (unicode whitespace mainly in practice, I think), but the differences are extremely small in practice and it doesn't actually matter for my current task:\r\n\r\n```python\r\ntext = \"\".join([text_f.readline() for _ in range(meta[\"nb_sentences\"])]).rstrip(\"\\n\")\r\n```" ]
2022-02-11T08:14:59
2022-03-17T18:01:04
2022-03-16T16:21:28
NONE
null
null
null
null
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the particular (mis)alignment is in various configurations: ```python from datasets import load_dataset dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_fi", split="train", use_auth_token=True) entry = dataset[0] # entry["text"] is from fi_part_3.txt.gz # entry["meta"] is from fi_meta_part_2.jsonl.gz dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_no", split="train", use_auth_token=True) entry = dataset[900000] # entry["text"] is from no_part_3.txt.gz and contains a blank line # entry["meta"] is from no_meta_part_1.jsonl.gz dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_mk", split="train", streaming=True, use_auth_token=True) # 9088 texts in the dataset are empty ``` For `deduplicated_fi`, all exported raw texts from the dataset are 17GB rather than 20GB as reported in the data splits overview table. The token count with `wc -w` for the raw texts is 2,067,556,874 rather than the expected 2,357,264,196 from the data splits table. For `deduplicated_no` all exported raw texts contain 624,040,887 rather than the expected 776,354,517 tokens. For `deduplicated_mk` it is 122,236,936 rather than 134,544,934 tokens. I'm not expecting the `wc -w` counts to line up exactly with the data splits table, but for comparison the `wc -w` count for `deduplicated_mk` on the raw texts is 134,545,424. ## Issues * The meta / text files are not paired correctly when loading, so the extracted texts do not have the right offsets, the metadata is not associated with the correct text, and the text files may not be processed to the end or may be processed beyond the end (empty texts). * The line count offset is not reset per file so the texts aren't aligned to the right offsets in any parts beyond the first part, leading to truncation when in effect blank lines are not skipped. * Non-unix newline characters are treated as newlines when reading the text files while the metadata only counts unix newlines for its line offsets, leading to further misalignments between the metadata and the extracted texts, and which also results in truncation. ## Expected results All texts from the OSCAR release are extracted according to the metadata and aligned with the correct metadata. 
## Fixes Not necessarily the exact fixes/checks you may want to use (I didn't test all languages or do any cross-platform testing, I'm not sure all the details are compatible with streaming), however to highlight the issues: ```diff diff --git a/OSCAR-2109.py b/OSCAR-2109.py index bbac1076..5eee8de7 100644 --- a/OSCAR-2109.py +++ b/OSCAR-2109.py @@ -20,6 +20,7 @@ import collections import gzip import json +import os import datasets @@ -387,9 +388,20 @@ class Oscar2109(datasets.GeneratorBasedBuilder): with open(checksum_file, encoding="utf-8") as f: data_filenames = [line.split()[1] for line in f if line] data_urls = [self.config.base_data_path + data_filename for data_filename in data_filenames] - text_files = dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")]) - metadata_files = dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")]) + # sort filenames so corresponding parts are aligned + text_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")])) + metadata_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")])) + assert len(text_files) == len(metadata_files) metadata_and_text_files = list(zip(metadata_files, text_files)) + for meta_path, text_path in metadata_and_text_files: + # check that meta/text part numbers are the same + if "part" in os.path.basename(text_path): + assert ( + os.path.basename(text_path).replace(".txt.gz", "").split("_")[-1] + == os.path.basename(meta_path).replace(".jsonl.gz", "").split("_")[-1] + ) + else: + assert len(metadata_and_text_files) == 1 return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"metadata_and_text_files": metadata_and_text_files}), ] @@ -397,10 +409,14 @@ class Oscar2109(datasets.GeneratorBasedBuilder): def _generate_examples(self, metadata_and_text_files): """This function returns the examples in the raw (text) form by iterating on all the files.""" id_ = 0 - offset = 0 for meta_path, text_path in metadata_and_text_files: + # line offsets are per text file + offset = 0 logger.info("generating examples from = %s", text_path) - with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8") as text_f: + # some texts contain non-Unix newlines that should not be + # interpreted as line breaks for the line counts in the metadata + # with readline() + with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8", newline="\n") as text_f: with gzip.open(open(meta_path, "rb"), "rt", encoding="utf-8") as meta_f: for line in meta_f: # read meta @@ -411,7 +427,12 @@ class Oscar2109(datasets.GeneratorBasedBuilder): offset += 1 text_f.readline() # read text - text = "".join([text_f.readline() for _ in range(meta["nb_sentences"])]).rstrip() + text_lines = [text_f.readline() for _ in range(meta["nb_sentences"])] + # all lines contain text (no blank lines or EOF) + assert all(text_lines) + assert "\n" not in text_lines offset += meta["nb_sentences"] + # only strip the trailing newline + text = "".join(text_lines).rstrip("\n") yield id_, {"id": id_, "text": text, "meta": meta} id_ += 1 ``` I've tested this with a number of smaller deduplicated languages with 1-20 parts and the resulting datasets looked correct in terms of word count and size when compared to the data splits table and raw texts, and the text/metadata alignments were correct in all my spot checks. However, there are many many languages I didn't test and I'm not sure that there aren't any texts containing blank lines in the corpus, for instance. 
For the cases I tested, the assertions related to blank lines and EOF made it easier to verify that the text and metadata were aligned as intended, since there would be little chance of spurious alignments of variable-length texts across so much data.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3704/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3704/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
33 days, 8:06:29
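A small, self-contained illustration of the two failure modes analysed in issue 3704 above (the file names are shortened, made-up stand-ins): pairing unsorted text and metadata file lists, and letting universal-newline mode count a stray carriage return as a line break.

```python
import io

# Download lists in arrival order, as in the report: zipping them directly can
# pair fi_part_3.txt.gz with fi_meta_part_2.jsonl.gz.
text_files = ["fi_part_3.txt.gz", "fi_part_1.txt.gz", "fi_part_2.txt.gz"]
meta_files = ["fi_meta_part_2.jsonl.gz", "fi_meta_part_1.jsonl.gz", "fi_meta_part_3.jsonl.gz"]

# Sorting both lists restores the part-to-part pairing the corpus relies on.
pairs = list(zip(sorted(meta_files), sorted(text_files)))
print(pairs[0])  # ('fi_meta_part_1.jsonl.gz', 'fi_part_1.txt.gz')

# Universal newlines also treat a lone "\r" as a line break, while the OSCAR
# metadata counts only "\n", so every later offset drifts by one.
raw = b"one\rtwo\nthree\n"
with io.TextIOWrapper(io.BytesIO(raw), encoding="utf-8") as f:
    print(sum(1 for _ in f))  # 3 "lines" under universal newlines
with io.TextIOWrapper(io.BytesIO(raw), encoding="utf-8", newline="\n") as f:
    print(sum(1 for _ in f))  # 2 lines when only "\n" ends a line
```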
https://api.github.com/repos/huggingface/datasets/issues/3703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3703/comments
https://api.github.com/repos/huggingface/datasets/issues/3703/events
https://github.com/huggingface/datasets/issues/3703
1,131,882,772
I_kwDODunzps5DdykU
3,703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
{ "avatar_url": "https://avatars.githubusercontent.com/u/28425091?v=4", "events_url": "https://api.github.com/users/zhangyifei1/events{/privacy}", "followers_url": "https://api.github.com/users/zhangyifei1/followers", "following_url": "https://api.github.com/users/zhangyifei1/following{/other_user}", "gists_url": "https://api.github.com/users/zhangyifei1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zhangyifei1", "id": 28425091, "login": "zhangyifei1", "node_id": "MDQ6VXNlcjI4NDI1MDkx", "organizations_url": "https://api.github.com/users/zhangyifei1/orgs", "received_events_url": "https://api.github.com/users/zhangyifei1/received_events", "repos_url": "https://api.github.com/users/zhangyifei1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zhangyifei1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangyifei1/subscriptions", "type": "User", "url": "https://api.github.com/users/zhangyifei1", "user_view_type": "public" }
[]
closed
false
null
[]
[ "![图片](https://user-images.githubusercontent.com/28425091/153547502-6bb0938d-788b-4857-b946-c3cf08fefce4.png)\r\nMy datasets version", "![图片](https://user-images.githubusercontent.com/28425091/153547587-f4677166-af9b-44a0-95ad-b6dba873978a.png)\r\n", "Hi! Some of our metrics require additional dependencies to work. In your case, simply installing the `seqeval` package with `pip install seqeval` should resolve the issue.", "> Hi! Some of our metrics require additional dependencies to work. In your case, simply installing the `seqeval` package with `pip install seqeval` should resolve the issue.\r\nI installed seqeval, but still reported the same error. That's too bad.\r\n", "> > Hi! Some of our metrics require additional dependencies to work. In your case, simply installing the `seqeval` package with `pip install seqeval` should resolve the issue.\r\n> > I installed seqeval, but still reported the same error. That's too bad.\r\n\r\nSame issue here. What should I do to fix this error? Please help! Thank you.", "I tried to install **seqeval** package through anaconda instead of pip:\r\n`conda install -c conda-forge seqeval`\r\nIt worked for me!", "I can run it through the following steps:\r\n![image](https://user-images.githubusercontent.com/69563759/159264511-1e252a4e-c8c8-44ab-b7bc-b4aac609bd9e.png)\r\nThank you for answering for me!", "just change the file name seqeval.py to myseqeval.py", "Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate" ]
2022-02-11T06:38:42
2023-07-11T09:31:59
2023-07-11T09:31:59
NONE
null
null
null
null
hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py to load locally. Loading code: metric = load_ metric(path='mymetric/seqeval/seqeval.py') But tips: Traceback (most recent call last): File "/home/ubuntu/Python3.6_project/zyf_project/transformers/examples/pytorch/token-classification/run_ner.py", line 604, in <module> main() File "/home/ubuntu/Python3.6_project/zyf_project/transformers/examples/pytorch/token-classification/run_ner.py", line 481, in main metric = load_metric(path='mymetric/seqeval/seqeval.py') File "/home/ubuntu/Python3.6_project/zyf_project/transformers_venv_0209/lib/python3.7/site-packages/datasets/load.py", line 610, in load_metric dataset=False, File "/home/ubuntu/Python3.6_project/zyf_project/transformers_venv_0209/lib/python3.7/site-packages/datasets/load.py", line 450, in prepare_module f"To be able to use this {module_type}, you need to install the following dependencies" ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance' **What should I do? Please help me, thank you**
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3703/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3703/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
515 days, 2:53:17
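A sketch of the route suggested at the end of issue 3703 above: install `seqeval` in the same environment as the rest of the stack and load the metric through the `evaluate` library, which replaced `datasets.load_metric`. The toy predictions are made up for illustration.

```python
# pip install evaluate seqeval   (in the same environment that runs the training script)
import evaluate

seqeval = evaluate.load("seqeval")

# Tiny made-up NER example in IOB format.
predictions = [["O", "B-PER", "I-PER", "O", "B-LOC"]]
references = [["O", "B-PER", "I-PER", "O", "B-LOC"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])
```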
https://api.github.com/repos/huggingface/datasets/issues/3700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3700/comments
https://api.github.com/repos/huggingface/datasets/issues/3700/events
https://github.com/huggingface/datasets/issues/3700
1,130,252,496
I_kwDODunzps5DXkjQ
3,700
Unable to load a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/97964230?v=4", "events_url": "https://api.github.com/users/PaulchauvinAI/events{/privacy}", "followers_url": "https://api.github.com/users/PaulchauvinAI/followers", "following_url": "https://api.github.com/users/PaulchauvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/PaulchauvinAI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulchauvinAI", "id": 97964230, "login": "PaulchauvinAI", "node_id": "U_kgDOBdbQxg", "organizations_url": "https://api.github.com/users/PaulchauvinAI/orgs", "received_events_url": "https://api.github.com/users/PaulchauvinAI/received_events", "repos_url": "https://api.github.com/users/PaulchauvinAI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulchauvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulchauvinAI/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulchauvinAI", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! `load_dataset` is intended to be used to load a canonical dataset (`wikipedia`), a packaged dataset (`csv`, `json`, ...) or a dataset hosted on the Hub. For local datasets saved with `save_to_disk(\"path/to/dataset\")`, use `load_from_disk(\"path/to/dataset\")`.", "Maybe we should raise an informative error message in this case...", "How should we load a locally self-gathered dataset then?" ]
2022-02-10T15:05:53
2024-07-04T08:39:23
2022-02-11T22:56:39
NONE
null
null
null
null
## Describe the bug Unable to load a dataset from Huggingface that I have just saved. ## Steps to reproduce the bug On Google colab `! pip install datasets ` `from datasets import load_dataset` `my_path = "wiki_dataset"` `dataset = load_dataset('wikipedia', "20200501.fr")` `dataset.save_to_disk(my_path)` `dataset = load_dataset(my_path)` ## Expected results Loading the dataset ## Actual results ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: null _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: string to {'builder_name': Value(dtype='string', id=None), 'citation': Value(dtype='string', id=None), 'config_name': Value(dtype='string', id=None), 'dataset_size': Value(dtype='int64', id=None), 'description': Value(dtype='string', id=None), 'download_checksums': {}, 'download_size': Value(dtype='int64', id=None), 'features': {'title': {'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None), '_type': Value(dtype='string', id=None)}, 'text': {'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'post_processed': Value(dtype='null', id=None), 'post_processing_size': Value(dtype='null', id=None), 'size_in_bytes': Value(dtype='int64', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='string', id=None)}}, 'supervised_keys': Value(dtype='null', id=None), 'task_templates': Value(dtype='null', id=None), 'version': {'version_str': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'major': Value(dtype='int64', id=None), 'minor': Value(dtype='int64', id=None), 'patch': Value(dtype='int64', id=None)}} because column names don't match ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3700/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 7:50:46
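A sketch of the fix pointed out in issue 3700 above: a dataset written with `save_to_disk()` has to be read back with `load_from_disk()`, not `load_dataset()` (any small dataset behaves the same way as the Wikipedia config from the report).

```python
from datasets import load_dataset, load_from_disk

my_path = "wiki_dataset"

dataset = load_dataset("wikipedia", "20200501.fr")
dataset.save_to_disk(my_path)

# load_dataset() is for Hub datasets and packaged formats (csv, json, ...);
# directories produced by save_to_disk() are reloaded with load_from_disk().
reloaded = load_from_disk(my_path)
print(reloaded)
```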
https://api.github.com/repos/huggingface/datasets/issues/3688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3688/comments
https://api.github.com/repos/huggingface/datasets/issues/3688/events
https://github.com/huggingface/datasets/issues/3688
1,127,218,321
I_kwDODunzps5DL_yR
3,688
Pyarrow version error
{ "avatar_url": "https://avatars.githubusercontent.com/u/49993443?v=4", "events_url": "https://api.github.com/users/Zaker237/events{/privacy}", "followers_url": "https://api.github.com/users/Zaker237/followers", "following_url": "https://api.github.com/users/Zaker237/following{/other_user}", "gists_url": "https://api.github.com/users/Zaker237/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Zaker237", "id": 49993443, "login": "Zaker237", "node_id": "MDQ6VXNlcjQ5OTkzNDQz", "organizations_url": "https://api.github.com/users/Zaker237/orgs", "received_events_url": "https://api.github.com/users/Zaker237/received_events", "repos_url": "https://api.github.com/users/Zaker237/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Zaker237/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zaker237/subscriptions", "type": "User", "url": "https://api.github.com/users/Zaker237", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @Zaker237, thanks for reporting.\r\n\r\nThis is weird: the error you get is only thrown if the installed pyarrow version is less than 3.0.0.\r\n\r\nCould you please check that you install pyarrow in the same Python virtual environment where you installed datasets?\r\n\r\nFrom the Python command line (or terminal) where you get the error, please type:\r\n```\r\nimport pyarrow\r\nprint(pyarrow.__version__)\r\nimport datasets\r\nprint(datasets.__version__)\r\n``` ", "hi @albertvillanova i try yesterday to create a new python environement with python 7 and try it on the environement and it worked. so i think that the error was not the package but may be jupyter notebook on conda. still yet i'm not yet sure but it worked in an environment created with venv", "OK, thanks @Zaker237 for your feedback.\r\n\r\nI close this issue then. Please, feel free to reopen it if the problem arises again." ]
2022-02-08T12:53:59
2022-02-09T06:35:33
2022-02-09T06:35:32
NONE
null
null
null
null
## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. i tryed with all version of pyarrow execpt `4.0.0` but still get the same error. ## Steps to reproduce the bug ```python import datasets ``` ## Expected results A clear and concise description of the expected results. ## Actual results AttributeError Traceback (most recent call last) <ipython-input-19-652e886d387f> in <module> ----> 1 import datasets ~\AppData\Local\Continuum\anaconda3\lib\site-packages\datasets\__init__.py in <module> 26 27 ---> 28 if _version.parse(pyarrow.__version__).major < 3: 29 raise ImportWarning( 30 "To use `datasets`, the module `pyarrow>=3.0.0` is required, and the current version of `pyarrow` doesn't match this condition.\n" AttributeError: 'Version' object has no attribute 'major' ## Environment info Traceback (most recent call last): File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Alex\AppData\Local\Continuum\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 5, in <module> File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages\datasets\__init__.py", line 28, in <module> if _version.parse(pyarrow.__version__).major < 3: AttributeError: 'Version' object has no attribute 'major' - `datasets` version: - Platform: Linux(Ubuntu) and Windows: conda on the both - Python version: 3.7 - PyArrow version: 7.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3688/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3688/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
17:41:33
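A diagnostic sketch matching the check suggested in issue 3688 above; run it in the exact interpreter (notebook kernel or virtual environment) that raises the error, since the thread traces the problem to mixed environments rather than to pyarrow itself.

```python
import sys
import pyarrow

print(sys.executable)       # confirms which Python environment is active
print(pyarrow.__version__)  # datasets requires pyarrow >= 3.0.0

import datasets             # only imports cleanly once the right pyarrow is visible
print(datasets.__version__)
```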
https://api.github.com/repos/huggingface/datasets/issues/3687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3687/comments
https://api.github.com/repos/huggingface/datasets/issues/3687/events
https://github.com/huggingface/datasets/issues/3687
1,127,154,766
I_kwDODunzps5DLwRO
3,687
Can't get the text data when calling to_tf_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/82086367?v=4", "events_url": "https://api.github.com/users/phrasenmaeher/events{/privacy}", "followers_url": "https://api.github.com/users/phrasenmaeher/followers", "following_url": "https://api.github.com/users/phrasenmaeher/following{/other_user}", "gists_url": "https://api.github.com/users/phrasenmaeher/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/phrasenmaeher", "id": 82086367, "login": "phrasenmaeher", "node_id": "MDQ6VXNlcjgyMDg2MzY3", "organizations_url": "https://api.github.com/users/phrasenmaeher/orgs", "received_events_url": "https://api.github.com/users/phrasenmaeher/received_events", "repos_url": "https://api.github.com/users/phrasenmaeher/repos", "site_admin": false, "starred_url": "https://api.github.com/users/phrasenmaeher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phrasenmaeher/subscriptions", "type": "User", "url": "https://api.github.com/users/phrasenmaeher", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1", "user_view_type": "public" } ]
[ "cc @Rocketknight1 ", "You are correct that `to_tf_dataset` only handles numerical columns right now, yes, though this is a limitation we might remove in future! The main reason we do this is that our models mostly do not include the tokenizer as a model layer, because it's very difficult to compile some of them in TF. So the \"normal\" Huggingface workflow is to first tokenize your dataset, and then pass tokenized tensors to the model.\r\n\r\nFor your use case, would you prefer to pass strings to the model, and use some text processing layers instead of the built-in tokenizers?", "Also tagging @gante just so he's aware, but I can handle this one!", "Thanks for the quick follow-up to my issue.\r\n\r\nFor my use-case, instead of the built-in tokenizers I wanted to use the `TextVectorization` layer to map from strings to integers. To achieve this, I came up with the following solution:\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import DefaultDataCollator\r\nimport tensorflow as tf\r\nimport string\r\nimport re\r\nfrom tensorflow.keras.layers.experimental.preprocessing import TextVectorization\r\n\r\n#some hyper-parameters for the text-to-integer mapping\r\nmax_features = 20000\r\nembedding_dim = 128\r\nsequence_length = 210\r\n\r\ndata_collator = DefaultDataCollator(return_tensors=\"tf\")\r\ndataset = load_dataset(\"sst\", \"default\")\r\n\r\n#adapt the vectorization layer on train data only\r\nvectorize_layer.adapt(dataset[\"train\"].to_dict(batched=False)[\"sentence\"])\r\n\r\ndef prepare_features(text, label):\r\n text = tf.expand_dims(text, -1)\r\n return {\"vectorized_text\": vectorize_layer(text)[0], \"label\": tf.expand_dims(label, axis=-1)}\r\n\r\nencoded_dataset = dataset.map(lambda example: prepare_features(example[\"sentence\"], example[\"label\"]), batched=False)\r\n\r\n\r\ndef custom_standardization(input_data):\r\n lowercase = tf.strings.lower(input_data)\r\n return tf.strings.regex_replace(\r\n lowercase, f\"[{re.escape(string.punctuation)}]\", \"\"\r\n )\r\n\r\nvectorize_layer = TextVectorization(\r\n standardize=custom_standardization,\r\n max_tokens=max_features,\r\n output_mode=\"int\",\r\n output_sequence_length=sequence_length,\r\n)\r\n\r\ntrain_dataset = encoded_dataset[\"train\"].to_tf_dataset(columns=['vectorized_text'], label_cols=[\"label\"],\r\n shuffle=True, batch_size=1, collate_fn=data_collator).unbatch()\r\n#similar for the other sub-sets\r\n\r\n```\r\n\r\nSince the strings would have been mapped to integers or floats at some point, it's no drawback that this mapping is done early in the process. \r\n\r\nFor the future, however, it'd be more convenient to get the string data, since I am also inspecting the dataset (longest sentence, shortest sentence), which is more challenging when working with integer or float. For now, this can be done by calling `to_dict`.", "> For the future, however, it'd be more convenient to get the string data, since I am also inspecting the dataset (longest sentence, shortest sentence), which is more challenging when working with integer or float.\r\n\r\nYes, I agree, so let's keep this issue open.", "Going to close this now - methods like `to_tf_dataset` and `prepare_tf_dataset` now support string data, and have done for a while! If anyone sees this and is encountering issues with string data in those methods, please file a new issue!" ]
2022-02-08T11:52:10
2023-01-19T14:55:18
2023-01-19T14:55:18
NONE
null
null
null
null
I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transformers import DefaultDataCollator data_collator = DefaultDataCollator(return_tensors="tf") dataset = load_dataset("sst") train_dataset = dataset["train"].to_tf_dataset(columns=['sentence'], label_cols="label", shuffle=True, batch_size=8,collate_fn=data_collator) ``` However, this only gets me the labels; the text--the most important part--is missing: ``` for s in train_dataset.take(1): print(s) #prints something like: ({}, <tf.Tensor: shape=(8,), ...>) ``` As you can see, it only returns the label part, not the data, as indicated by the empty dictionary, `{}`. So far, I've played with various settings of the method arguments, but to no avail; I do not want to perform any text processing at this time. On my quest to achieve what I want ( a `tf.data.Dataset`), I've consulted these resources: [https://www.philschmid.de/huggingface-transformers-keras-tf](https://www.philschmid.de/huggingface-transformers-keras-tf) [https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow](https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow) I was surprised to not find more extensive examples on how to transform a Hugginface dataset to one compatible with TensorFlow. If you could point me to where I am going wrong, please do so. Thanks in advance for your support. --- Edit: In the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset), I found the following description: _In general, only columns that the model can use as input should be included here (numeric data only)._ Does this imply that no textual, i.e., `string` data can be loaded?
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3687/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3687/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
345 days, 3:03:08
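A sketch of the workflow described in the replies to issue 3687 above: tokenize the text column first, then hand the numeric columns to `to_tf_dataset` (the checkpoint name is an arbitrary choice for illustration). The closing comment notes that newer releases accept string columns directly.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DefaultDataCollator

dataset = load_dataset("sst", "default")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=64)

# Map the string column to input_ids/attention_mask before the TF conversion.
encoded = dataset["train"].map(tokenize, batched=True)

data_collator = DefaultDataCollator(return_tensors="tf")
train_tf = encoded.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    shuffle=True,
    batch_size=8,
    collate_fn=data_collator,
)

for features, labels in train_tf.take(1):
    print(features["input_ids"].shape, labels.shape)
```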
https://api.github.com/repos/huggingface/datasets/issues/3686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3686/comments
https://api.github.com/repos/huggingface/datasets/issues/3686/events
https://github.com/huggingface/datasets/issues/3686
1,127,137,290
I_kwDODunzps5DLsAK
3,686
`Translation` features cannot be `flatten`ed
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "Thanks for reporting, @SBrandeis! Some additional feature types that don't behave as expected when flattened: `Audio`, `Image` and `TranslationVariableLanguages`" ]
2022-02-08T11:33:48
2022-03-18T17:28:13
2022-03-18T17:28:13
CONTRIBUTOR
null
null
null
null
## Describe the bug (`Dataset.flatten`)[https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265] fails for columns with feature (`Translation`)[https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8] ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("europa_ecdc_tm", "en2fr", split="train[:10]") print(dataset.features) # {'translation': Translation(languages=['en', 'fr'], id=None)} print(dataset[0]) # {'translation': {'en': 'Vaccination against hepatitis C is not yet available.', 'fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.'}} dataset.flatten() ``` ## Expected results `dataset.flatten` should flatten the `Translation` column as if it were a dict of `Value("string")` ```python dataset[0] # {'translation.en': 'Vaccination against hepatitis C is not yet available.', 'translation.fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.' } dataset.features # {'translation.en': Value("string"), 'translation.fr': Value("string")} ``` ## Actual results ```python In [31]: dset.flatten() --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-31-bb88eb5276ee> in <module> ----> 1 dset.flatten() [...]\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs) 411 # Call actual function 412 --> 413 out = func(self, *args, **kwargs) 414 415 # Update fingerprint of in-place transforms + update in-place history of transforms [...]\site-packages\datasets\arrow_dataset.py in flatten(self, new_fingerprint, max_depth) 1294 break 1295 dataset.info.features = self.features.flatten(max_depth=max_depth) -> 1296 dataset._data = update_metadata_with_features(dataset._data, dataset.features) 1297 logger.info(f'Flattened dataset from depth {depth} to depth {1 if depth + 1 < max_depth else "unknown"}.') 1298 dataset._fingerprint = new_fingerprint [...]\site-packages\datasets\arrow_dataset.py in update_metadata_with_features(table, features) 534 def update_metadata_with_features(table: Table, features: Features): 535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.""" --> 536 features = Features({col_name: features[col_name] for col_name in table.column_names}) 537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata: 538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features)) [...]\site-packages\datasets\arrow_dataset.py in <dictcomp>(.0) 534 def update_metadata_with_features(table: Table, features: Features): 535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.""" --> 536 features = Features({col_name: features[col_name] for col_name in table.column_names}) 537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata: 538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features)) KeyError: 'translation.en' ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3686/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
38 days, 5:54:25
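A minimal sketch of a workaround for the `flatten`/`Translation` failure reported above, assuming the same `europa_ecdc_tm` config used in the report: expand the nested translation dict with `map` instead of calling `flatten()`. This is only an illustration, not the library's eventual fix.

```python
from datasets import load_dataset

# Workaround sketch: expand the nested Translation dict manually with `map`
# instead of calling `flatten()`, which raises the KeyError shown above.
dataset = load_dataset("europa_ecdc_tm", "en2fr", split="train[:10]")

def split_translation(example):
    # Copy each language into its own top-level string column.
    return {
        "translation.en": example["translation"]["en"],
        "translation.fr": example["translation"]["fr"],
    }

flat = dataset.map(split_translation, remove_columns=["translation"])
print(flat[0])
# {'translation.en': 'Vaccination against hepatitis C is not yet available.', 'translation.fr': '...'}
```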
https://api.github.com/repos/huggingface/datasets/issues/3679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3679/comments
https://api.github.com/repos/huggingface/datasets/issues/3679/events
https://github.com/huggingface/datasets/issues/3679
1,124,062,133
I_kwDODunzps5C_9O1
3,679
Download datasets from a private hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/3436143?v=4", "events_url": "https://api.github.com/users/juliensimon/events{/privacy}", "followers_url": "https://api.github.com/users/juliensimon/followers", "following_url": "https://api.github.com/users/juliensimon/following{/other_user}", "gists_url": "https://api.github.com/users/juliensimon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/juliensimon", "id": 3436143, "login": "juliensimon", "node_id": "MDQ6VXNlcjM0MzYxNDM=", "organizations_url": "https://api.github.com/users/juliensimon/orgs", "received_events_url": "https://api.github.com/users/juliensimon/received_events", "repos_url": "https://api.github.com/users/juliensimon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/juliensimon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juliensimon/subscriptions", "type": "User", "url": "https://api.github.com/users/juliensimon", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "A929D8", "default": false, "description": "", "id": 3814924348, "name": "private-hub", "node_id": "LA_kwDODunzps7jYyA8", "url": "https://api.github.com/repos/huggingface/datasets/labels/private-hub" } ]
closed
false
null
[]
[ "For reference:\r\nhttps://github.com/huggingface/transformers/issues/15514\r\nhttps://github.com/huggingface/huggingface_hub/issues/650", "Hi ! For information one can set the environment variable `HF_ENDPOINT` (default is `https://huggingface.co`) if they want to use a private hub.\r\n\r\nWe may need to coordinate with the other libraries to have a consistent way of changing the hub endpoint", "Yes, I tested it successfully this morning. Thanks." ]
2022-02-04T10:49:06
2022-02-22T11:08:07
2022-02-22T11:08:07
NONE
null
null
null
null
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. The same issue exists with the transformers library and the CLI. I'm going to create issues there as well, and I'll reference them below.
{ "avatar_url": "https://avatars.githubusercontent.com/u/3436143?v=4", "events_url": "https://api.github.com/users/juliensimon/events{/privacy}", "followers_url": "https://api.github.com/users/juliensimon/followers", "following_url": "https://api.github.com/users/juliensimon/following{/other_user}", "gists_url": "https://api.github.com/users/juliensimon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/juliensimon", "id": 3436143, "login": "juliensimon", "node_id": "MDQ6VXNlcjM0MzYxNDM=", "organizations_url": "https://api.github.com/users/juliensimon/orgs", "received_events_url": "https://api.github.com/users/juliensimon/received_events", "repos_url": "https://api.github.com/users/juliensimon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/juliensimon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juliensimon/subscriptions", "type": "User", "url": "https://api.github.com/users/juliensimon", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3679/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
18 days, 0:19:01
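A minimal sketch of the `HF_ENDPOINT` approach mentioned in the comments above; the endpoint URL and repository name are placeholders for a hypothetical private deployment.

```python
import os

# Point the library at a private hub *before* importing datasets,
# since the endpoint is typically read at import time.
# The URL and repo name below are placeholders, not real endpoints.
os.environ["HF_ENDPOINT"] = "https://hub.internal.example.com"

from datasets import load_dataset

ds = load_dataset("my-org/private-dataset", split="train", use_auth_token=True)
```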
https://api.github.com/repos/huggingface/datasets/issues/3677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3677/comments
https://api.github.com/repos/huggingface/datasets/issues/3677/events
https://github.com/huggingface/datasets/issues/3677
1,123,192,866
I_kwDODunzps5C8pAi
3,677
Discovery cannot be streamed anymore
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Seems like a regression from https://github.com/huggingface/datasets/pull/2843\r\n\r\nOr maybe it's an issue with the hosting. I don't think so, though, because https://www.dropbox.com/s/aox84z90nyyuikz/discovery.zip seems to work as expected\r\n\r\n", "Hi @severo, thanks for reporting.\r\n\r\nSome servers do not support HTTP range requests, and those are required to stream some file formats (like ZIP in this case).\r\n\r\nLet me try to propose a workaround. " ]
2022-02-03T15:02:03
2022-02-10T16:51:24
2022-02-10T16:51:24
COLLABORATOR
null
null
null
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) ``` ## Expected results The first row of the train split. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 365, in __iter__ for key, example in self._iter(): File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 362, in _iter yield from ex_iterable File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 272, in __iter__ yield from islice(self.ex_iterable, self.n) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 79, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/discovery/542fab7a9ddc1d9726160355f7baa06a1ccc44c40bc8e12c09e9bc743aca43a2/discovery.py", line 333, in _generate_examples with open(data_file, encoding="utf8") as f: File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py", line 64, in wrapper return function(*args, use_auth_token=use_auth_token, **kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 369, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 456, in open return open_files( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 288, in open_files fs, fs_token, paths = get_fs_token_paths( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 611, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 253, in filesystem return cls(**storage_options) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 68, in __call__ obj = super().__call__(*args, **kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__ self.zip = zipfile.ZipFile(self.fo) File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1257, in __init__ self._RealGetContents() File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1320, in _RealGetContents endrec = _EndRecData(fp) File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 263, in _EndRecData fpin.seek(0, 2) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 676, in seek raise ValueError("Cannot seek streaming HTTP file") ValueError: Cannot seek streaming HTTP file ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.11.0-1027-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3677/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3677/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
7 days, 1:49:21
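The `ValueError: Cannot seek streaming HTTP file` above comes from streaming a ZIP archive whose host does not honour HTTP range requests. Below is a small sketch for probing range support (not part of the `datasets` API; the URL is a placeholder), plus the non-streaming fallback.

```python
import requests
from datasets import load_dataset

# Probe whether the host honours byte-range requests,
# which are needed to seek inside a remote ZIP archive.
url = "https://example.com/path/to/discovery.zip"  # placeholder; substitute the archive URL
resp = requests.get(url, headers={"Range": "bytes=0-0"}, stream=True, allow_redirects=True)
print(resp.status_code)  # 206 indicates range support; 200 means the server ignored the Range header
resp.close()

# Fallback until streaming is fixed: download the dataset instead of streaming it.
dataset = load_dataset("discovery", "discovery", split="train", streaming=False)
```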
https://api.github.com/repos/huggingface/datasets/issues/3676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3676/comments
https://api.github.com/repos/huggingface/datasets/issues/3676/events
https://github.com/huggingface/datasets/issues/3676
1,123,096,362
I_kwDODunzps5C8Rcq
3,676
`None` replaced by `[]` after first batch in map
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "It looks like this is because of this behavior in pyarrow:\r\n```python\r\nimport pyarrow as pa\r\n\r\narr = pa.array([None, [0]])\r\nreconstructed_arr = pa.ListArray.from_arrays(arr.offsets, arr.values)\r\nprint(reconstructed_arr.to_pylist())\r\n# [[], [0]]\r\n```\r\n\r\nIt seems that `arr.offsets` can reconstruct the array properly, but an offsets array with null values can:\r\n```python\r\nfixed_offsets = pa.array([None, 0, 1])\r\nfixed_arr = pa.ListArray.from_arrays(fixed_offsets, arr.values)\r\nprint(fixed_arr.to_pylist())\r\n# [None, [0]]\r\n\r\nprint(arr.offsets.to_pylist())\r\n# [0, 0, 1]\r\nprint(fixed_offsets.to_pylist())\r\n# [None, 0, 1]\r\n```\r\nEDIT: this is because `arr.offsets` is not enough to reconstruct the array, we also need the validity bitmap", "The offsets don't have nulls because they don't include the validity bitmap from `arr.buffers()[0]`, which is used to say which values are null and which values are non-null.\r\n\r\nThough the validity bitmap also seems to be wrong:\r\n```python\r\nbin(int(arr.buffers()[0].hex(), 16))\r\n# '0b10'\r\n# it should be 0b110 - 1 corresponds to non-null and 0 corresponds to null, if you take the bits in reverse order\r\n```\r\n\r\nSo apparently I can't even create the fixed offsets array using this.\r\n\r\nIf I understand correctly it's always missing the 1 on the left, so I can add it manually as a hack to fix the issue until this is fixed in pyarrow EDIT: actually it may be more complicated than that\r\n\r\nEDIT2: actuall it's right, it corresponds to the validity bitmap of the array of logical length 2. So if we use the offsets array, the values array, and this validity bitmap it should be possible to reconstruct the array properly", "I created an issue on Apache Arrow's JIRA: https://issues.apache.org/jira/browse/ARROW-15837", "And another one: https://issues.apache.org/jira/browse/ARROW-15839", "FYI the behavior is the same with:\r\n- `datasets` version: 1.18.3\r\n- Platform: Linux-5.8.0-50-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.1\r\n\r\n\r\nbut not with:\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-4.18.0-305.40.2.el8_4.x86_64-x86_64-with-redhat-8.4-Ootpa\r\n- Python version: 3.7.11\r\n- PyArrow version: 3.0.0\r\n\r\ni.e. it outputs:\r\n```py\r\n0 [None, [0]]\r\n1 [None, [0]]\r\n2 [None, [0]]\r\n3 [None, [0]]\r\n```\r\n", "Thanks for the insights @PaulLerner !\r\n\r\nI found a way to workaround this issue for the code example presented in this issue.\r\n\r\nNote that empty lists will still appear when you explicitly `cast` a list of lists that contain None values like [None, [0]] to a new feature type (e.g. to change the integer precision). In this case it will show a warning that it happened. If you don't cast anything, then the None values will be kept as expected.\r\n\r\nLet me know what you think !", "Hi! I feel like I’m missing something in your answer, *what* is the workaround? Is it fixed in some `datasets` version?", "`pa.ListArray.from_arrays` returns empty lists instead of None values. The workaround I added inside `datasets` simply consists in not using `pa.ListArray.from_arrays` :)\r\n\r\nOnce this PR [here ](https://github.com/huggingface/datasets/pull/4282)is merged, we'll release a new version of `datasets` that currectly returns the None values in the case described in this issue\r\n\r\nEDIT: released :) but let's keep this issue open because it might happen again if users change the integer precision for example" ]
2022-02-03T13:36:48
2022-10-28T13:13:20
2022-10-28T13:13:20
MEMBER
null
null
null
null
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # 2 [[], [0]] # 3 [[], [0]] ``` This issue has been experienced when running the `run_qa.py` example from `transformers` (see issue https://github.com/huggingface/transformers/issues/15401). This can be due to a bug when casting `None` in nested lists. Casting only happens after the first batch, since the first batch is used to infer the feature types. cc @sgugger
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 2, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/3676/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3676/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
266 days, 23:36:32
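A condensed, runnable version of the pyarrow snippets from the comments above, showing how `ListArray.from_arrays` drops the null entry and how an offsets array that carries the null restores it:

```python
import pyarrow as pa

arr = pa.array([None, [0]])
print(arr.to_pylist())  # [None, [0]]

# Rebuilding from offsets/values alone loses the validity bitmap,
# so the null row silently becomes an empty list.
print(pa.ListArray.from_arrays(arr.offsets, arr.values).to_pylist())  # [[], [0]]

# An offsets array that carries the null reconstructs the array as expected.
fixed_offsets = pa.array([None, 0, 1])
print(pa.ListArray.from_arrays(fixed_offsets, arr.values).to_pylist())  # [None, [0]]
```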
https://api.github.com/repos/huggingface/datasets/issues/3675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3675/comments
https://api.github.com/repos/huggingface/datasets/issues/3675/events
https://github.com/huggingface/datasets/issues/3675
1,123,078,408
I_kwDODunzps5C8NEI
3,675
Add CodeContests dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
[ "@mariosasko Can I take this up?", "This dataset is now available here: https://huggingface.co/datasets/deepmind/code_contests." ]
2022-02-03T13:20:00
2022-07-20T11:07:05
2022-07-20T11:07:05
COLLABORATOR
null
null
null
null
## Adding a Dataset - **Name:** CodeContests - **Description:** CodeContests is a competitive programming dataset for machine-learning. - **Paper:** - **Data:** https://github.com/deepmind/code_contests - **Motivation:** This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3675/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
166 days, 21:47:05
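Per the closing comment, the data ended up hosted directly on the Hub; a minimal load call using the repository name given there:

```python
from datasets import load_dataset

# Loads the Hub-hosted version mentioned in the closing comment above.
code_contests = load_dataset("deepmind/code_contests")
print(code_contests)
```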
https://api.github.com/repos/huggingface/datasets/issues/3673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3673/comments
https://api.github.com/repos/huggingface/datasets/issues/3673/events
https://github.com/huggingface/datasets/issues/3673
1,123,010,520
I_kwDODunzps5C78fY
3,673
`load_dataset("snli")` is different from dataset viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" } ]
[ "Yes, we decided to replace the encoded label with the corresponding label when possible in the dataset viewer. But\r\n1. maybe it's the wrong default\r\n2. we could find a way to show both (with a switch, or showing both ie. `0 (neutral)`).\r\n", "Hi @severo,\r\n\r\nThanks for clarifying. \r\n\r\nI think this default is a bit counterintuitive for the user. However, this is a personal opinion that might not be general. I think it is nice to have the actual (non-encoded) labels in the viewer. On the other hand, it would be nice to match what the user sees with what they get when they download a dataset. I don't know - I can see the difficulty of choosing a default :)\r\nMaybe having non-encoded labels as a default can be useful?\r\n\r\nAnyway, I think the issue has been addressed. Thanks a lot for your super-quick answer!\r\n\r\n ", "Thanks for the 👍 in https://github.com/huggingface/datasets/issues/3673#issuecomment-1029008349 @mariosasko @gary149 @pietrolesci, but as I proposed various solutions, it's not clear to me which you prefer. Could you write your preferences as a comment?\r\n\r\n_(note for myself: one idea per comment in the future)_", "As I am working with seq2seq, I prefer having the label in string form rather than numeric. So the viewer is fine and the underlying dataset should be \"decoded\" (from int to str). In this way, the user does not have to search for a mapping `int -> original name` (even though is trivial to find, I reckon). Also, encoding labels is rather easy.\r\n\r\nI hope this is useful", "I like the idea of \"0 (neutral)\". The label name can even be greyed to make it clear that it's not part of the actual item in the dataset, it's just the meaning.", "I like @lhoestq's idea of having grayed-out labels.", "Proposals by @gary149. Which one do you prefer? Please vote with the thumbs\r\n\r\n- 👍 \r\n\r\n ![image](https://user-images.githubusercontent.com/1676121/152387949-883c7d7e-a9f3-48aa-bff9-11a691555e6e.png)\r\n\r\n- 👎 \r\n\r\n ![image (1)](https://user-images.githubusercontent.com/1676121/152388061-32d95e42-cade-4ae4-9a77-7365e7b72b8f.png)\r\n\r\n", "I like Option 1 better as it shows clearly what the user is downloading", "Thanks! ", "It's [live](https://huggingface.co/datasets/glue/viewer/cola/train):\r\n\r\n<img width=\"1126\" alt=\"Capture d’écran 2022-02-14 à 10 26 03\" src=\"https://user-images.githubusercontent.com/1676121/153836716-25f6205b-96af-42d8-880a-7c09cb24c420.png\">\r\n\r\nThanks all for the help to improve the UI!", "Love it ! thanks :)" ]
2022-02-03T12:10:43
2022-02-16T11:22:31
2022-02-11T17:01:21
NONE
null
null
null
null
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is this expected? ## Environment info - `datasets` version: - Platform: Ubuntu 20.4 - Python version: 3.7
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3673/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3673/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
8 days, 4:50:38
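The viewer/download mismatch described above comes from the `ClassLabel` encoding. A sketch of the usual way to recover the string labels locally (in SNLI, `-1` marks examples without a gold label and is left unmapped here):

```python
from datasets import load_dataset

snli = load_dataset("snli", split="train")
label_feature = snli.features["label"]  # ClassLabel with names ['entailment', 'neutral', 'contradiction']

print(snli[0]["label"], label_feature.int2str(snli[0]["label"]))  # e.g. 1 neutral

# Materialise a human-readable column next to the encoded one;
# label -1 (no gold label) is kept as None instead of being mapped.
snli = snli.map(
    lambda ex: {"label_text": label_feature.int2str(ex["label"]) if ex["label"] >= 0 else None}
)
```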
https://api.github.com/repos/huggingface/datasets/issues/3671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3671/comments
https://api.github.com/repos/huggingface/datasets/issues/3671/events
https://github.com/huggingface/datasets/issues/3671
1,122,864,253
I_kwDODunzps5C7Yx9
3,671
Give an estimate of the dataset size in DatasetInfo
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[]
2022-02-03T09:47:10
2022-02-03T09:47:10
null
COLLABORATOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** Currently, only some of the datasets provide `dataset_size`, `download_size`, `size_in_bytes` (and `num_bytes` and `num_examples` inside `splits`). I would like to get this information, or an estimate of it, for all the datasets. **Describe the solution you'd like** - get access to the git information for the dataset files hosted on the hub - look at the [`Content-Length`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Length) for the files served by HTTP
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3671/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3671/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
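For the `Content-Length` route suggested in the request above, a sketch of probing a single remote file over HTTP; the URL is a placeholder and this is not an existing `datasets` API.

```python
import requests

# Estimate a remote data file's size from the Content-Length header.
# The URL below is a placeholder for a file hosted on the Hub.
url = "https://huggingface.co/datasets/some-user/some-dataset/resolve/main/train.csv"
resp = requests.head(url, allow_redirects=True)
size = int(resp.headers.get("Content-Length", 0))
print(f"approximate size: {size / 1024 ** 2:.1f} MiB")
```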
https://api.github.com/repos/huggingface/datasets/issues/3668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3668/comments
https://api.github.com/repos/huggingface/datasets/issues/3668/events
https://github.com/huggingface/datasets/issues/3668
1,122,261,736
I_kwDODunzps5C5Fro
3,668
Couldn't cast array of type string error with cast_column
{ "avatar_url": "https://avatars.githubusercontent.com/u/25264037?v=4", "events_url": "https://api.github.com/users/R4ZZ3/events{/privacy}", "followers_url": "https://api.github.com/users/R4ZZ3/followers", "following_url": "https://api.github.com/users/R4ZZ3/following{/other_user}", "gists_url": "https://api.github.com/users/R4ZZ3/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/R4ZZ3", "id": 25264037, "login": "R4ZZ3", "node_id": "MDQ6VXNlcjI1MjY0MDM3", "organizations_url": "https://api.github.com/users/R4ZZ3/orgs", "received_events_url": "https://api.github.com/users/R4ZZ3/received_events", "repos_url": "https://api.github.com/users/R4ZZ3/repos", "site_admin": false, "starred_url": "https://api.github.com/users/R4ZZ3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/R4ZZ3/subscriptions", "type": "User", "url": "https://api.github.com/users/R4ZZ3", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! I wasn't able to reproduce the error, are you still experiencing this ? I tried calling `cast_column` on a string column containing paths.\r\n\r\nIf you manage to share a reproducible code example that would be perfect", "Hi,\r\n\r\nI think my team mate got this solved. Clolsing it for now and will reopen if I experience this again.\r\nThanks :) ", "Hi @R4ZZ3,\r\n\r\nIf it is not too much of a bother, can you please help me how to resolve this error? I am exactly getting the same error where I am going as per the documentation guideline:\r\n\r\n`my_audio_dataset = my_audio_dataset.cast_column(\"audio_paths\", Audio())`\r\n\r\nwhere `\"audio_paths\"` is a dataset column (feature) having strings of absolute paths to mp3 files of the dataset.\r\n\r\n", "I was having the same issue with this code:\r\n\r\n```\r\ndataset = dataset.map(\r\n lambda batch: {\"full_path\" : os.path.join(self.data_path, batch[\"path\"])},\r\n num_procs = 4\r\n)\r\nmy_audio_dataset = dataset.cast_column(\"full_path\", Audio(sampling_rate=16_000))\r\n```\r\n\r\nRemoving the \"num_procs\" argument fixed it somehow.\r\nUsing a mac with m1 chip", "Hi @Hubert-Bonisseur, I think this will be fixed by https://github.com/huggingface/datasets/pull/4614" ]
2022-02-02T18:33:29
2022-07-19T13:36:24
2022-07-19T13:36:24
NONE
null
null
null
null
## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264037/152214027-9c42a71a-dd24-463c-a346-57e0287e5a8f.png) This was working with datasets version 1.17.1.dev0 but now with version 1.18.3 produces the error above. ## Steps to reproduce the bug load dataset: ![image](https://user-images.githubusercontent.com/25264037/152216145-159553b6-cddc-4f0b-8607-7e76b600e22a.png) remove columns: ![image](https://user-images.githubusercontent.com/25264037/152214707-7c7e89d1-87d8-4b4f-8cfc-5d7223d35644.png) run my fix_path function. This also creates the audio column that is referring to the absolute file path of the audio ![image](https://user-images.githubusercontent.com/25264037/152214773-51f71ccf-d31b-4449-b63a-1af56436e49f.png) Then I concatenate few other datasets and finally try the cast_column method ![image](https://user-images.githubusercontent.com/25264037/152215032-f341ec86-9d6d-48c9-943b-e2efe37a4d98.png) but get error: ![image](https://user-images.githubusercontent.com/25264037/152215073-b85bd057-98e8-413c-9b05-51e9805f2c24.png) ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: OVH Cloud, AI Training section, container for Huggingface Robust Speech Recognition event image(baaastijn/ovh_huggingface) ![image](https://user-images.githubusercontent.com/25264037/152215161-b4ff7bfb-2736-4afb-9223-761a3338d23c.png) - Python version: 3.8.8 - PyArrow version: ![image](https://user-images.githubusercontent.com/25264037/152215936-4d365760-557e-456b-b5eb-ad1d15cf5073.png)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3668/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3668/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
166 days, 19:02:55
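For reference, a minimal sketch of the `cast_column`-to-`Audio` pattern the report above is attempting (it works on later versions of `datasets`); the file paths are placeholders, and decoding only happens when an example is accessed.

```python
from datasets import Audio, Dataset

# Toy dataset whose "audio" column holds absolute paths to mp3 files (placeholders).
ds = Dataset.from_dict({"audio": ["/data/common_voice/clip_0001.mp3",
                                  "/data/common_voice/clip_0002.mp3"]})

# Cast the string column so each path is decoded as 16 kHz audio on access.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds.features)
# ds[0]["audio"] would return {"path": ..., "array": ..., "sampling_rate": 16000} once real files exist.
```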
https://api.github.com/repos/huggingface/datasets/issues/3663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3663/comments
https://api.github.com/repos/huggingface/datasets/issues/3663/events
https://github.com/huggingface/datasets/issues/3663
1,121,067,647
I_kwDODunzps5C0iJ_
3,663
[Audio] Path of Common Voice cannot be used for audio loading anymore
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anton-l", "id": 26864830, "login": "anton-l", "node_id": "MDQ6VXNlcjI2ODY0ODMw", "organizations_url": "https://api.github.com/users/anton-l/orgs", "received_events_url": "https://api.github.com/users/anton-l/received_events", "repos_url": "https://api.github.com/users/anton-l/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "type": "User", "url": "https://api.github.com/users/anton-l", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", 
"organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "Having talked to @lhoestq, I see that this feature is no longer supported. \r\n\r\nI really don't think this was a good idea. It is a major breaking change and one for which we don't even have a working solution at the moment, which is bad for PyTorch as we don't want to force people to have `datasets` decode audio files automatically, but **really** bad for Tensorflow and Flax where we **currently cannot** even use `datasets` to load `.mp3` files - e.g. `common_voice` doesn't work anymore in a TF training script. Note this worked perfectly fine before making the change (think it was done [here](https://github.com/huggingface/datasets/pull/3290) no?)\r\n\r\nIMO, it's really important to think about a solution here and I strongly favor to make a difference here between loading a dataset in streaming mode and in non-streaming mode, so that in non-streaming mode the actual downloaded file is displayed. It's really crucial for people to be able to analyse the original files IMO when the dataset is not downloaded in streaming mode. \r\n\r\nThere are the following reasons why it is paramount to have access to the **original** audio file in my opinion (in non-streaming mode):\r\n- There are a wide variety of different libraries to load audio data with varying support on different platforms. For me it was quite clear that there is simply to single good library to load audio files for all platforms - so we have to leave the option to the user to decide which loading to use.\r\n- We had support for audio datasets a long time before streaming audio was possible. There were quite some versions where we advertised **everywhere** to load the audio from the path name (and there are many places where we still do even though it's not possible anymore). To give some examples:\r\n - Official example of TF Wav2Vec2: https://github.com/huggingface/transformers/blob/f427e750490b486944cc9be3c99834ad5cf78b57/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1423 Wav2Vec2 is as important for speech as BERT is for NLP - so it's **very** important. The official example currently doesn't work and we don't even have a workaround for it for MP3 files at the moment. Same goes for Flax.\r\n - The most downloaded non-nlp checkpoint: https://huggingface.co/facebook/wav2vec2-base-960h#usage has a usage example which doesn't work anymore with the current datasets implementation. I'll update this now, but we have >1000 wav2vec2 checkpoints on the Hub and we can't update all the model cards.\r\n => This is a big breaking change with no current solution. For `transformers` breaking changes are one of the biggest complaints.\r\n- Similar to this we also shouldn't assume that there is only one resampling method for Audio. I think it's good to have one offered automatically by `datasets`, but we have to leave the user the freedom to choose her/his own resampling as well. Resampling can take very different filtering windows and other parameters which are currently somewhat hardcoded in `datasets`, which users might very well want to change.\r\n\r\n\r\n=> IMO, it's a **very** big priority to again have the correct absolute path in non-streaming mode. The other solution of providing a path-like object derived from the bytes stocked in the `.array` file is not nearly as user-friendly, but better than nothing. ", "Agree that we need to have access to the original sound files. 
Few days ago I was looking for these original files because I suspected there is bug in the audio resampling (confirmed in https://github.com/huggingface/datasets/issues/3662) and I want to do my own resampling to workaround the bug, which is now not possible anymore due to the unavailability of the original files.", "@patrickvonplaten \r\n> The other solution of providing a path-like object derived from the bytes stocked in the .array file is not nearly as user-friendly, but better than nothing\r\n\r\nJust to clarify, here you describe the approach that uses the `Audio.decode` attribute to access the underlying bytes?\r\n\r\n> The official example currently doesn't work and we don't even have a workaround for it for MP3 files at the moment\r\n\r\nI'd assume this is because we use `sox_io` as a backend for decoding. However, soon we should be able to use `soundfile`, which supports path-like objects, for MP3 (https://github.com/huggingface/datasets/pull/3667#issuecomment-1030090627).\r\n\r\nYour concern is reasonable, but there are situations where we can only serve bytes (see https://github.com/huggingface/datasets/pull/3685 for instance). IMO it makes sense to fix the affected datasets for now, but I don't think we should care too much whether we rely on local paths or bytes after soundfile adds support for MP3 as long as our examples work (shouldn't be too hard to update the `map_to_array` functions) and we properly document how to access the underlying path/bytes for custom decoding (via `ds.cast_column(\"audio\", Audio(decode=False))`).\r\n", "Related to this discussion: in https://github.com/huggingface/datasets/pull/3664#issuecomment-1031866858 I propose how we could change `iter_archive` to work for streaming and also return local paths (as it used too !). I'd love your opinions on this", "> @patrickvonplaten\r\n> \r\n> > The other solution of providing a path-like object derived from the bytes stocked in the .array file is not nearly as user-friendly, but better than nothing\r\n> \r\n> Just to clarify, here you describe the approach that uses the `Audio.decode` attribute to access the underlying bytes?\r\n\r\nYes! \r\n\r\n> \r\n> > The official example currently doesn't work and we don't even have a workaround for it for MP3 files at the moment\r\n> \r\n> I'd assume this is because we use `sox_io` as a backend for decoding. However, soon we should be able to use `soundfile`, which supports path-like objects, for MP3 ([#3667 (comment)](https://github.com/huggingface/datasets/pull/3667#issuecomment-1030090627)). \r\n> Your concern is reasonable, but there are situations where we can only serve bytes (see #3685 for instance). IMO it makes sense to fix the affected datasets for now, but I don't think we should care too much whether we rely on local paths or bytes after soundfile adds support for MP3 as long as our examples work (shouldn't be too hard to update the `map_to_array` functions) and we properly document how to access the underlying path/bytes for custom decoding (via `ds.cast_column(\"audio\", Audio(decode=False))`).\r\n\r\nYes this might be, but I highly doubt that `soundfile` is the go-to library for audio then. @anton-l and I have tried out a bunch of different audio loading libraries (`soundfile`, `librosa`, `torchaudio`, pure `ffmpeg`, `audioread`, ...). One thing that was pretty clear to me is that there is just no \"de-facto standard\" library and they all have pros and cons. None of the libraries really supports \"batch\"-ed audio loading. Some depend on PyTorch. 
`torchaudio` is 100x faster (really!) than `librosa's` fallback on MP3. `torchaudio` often has problems with multi-proessing, ... Also we should keep in mind that resampling is similarly not as simple as reading a text file. It's a pretty complex signal processing transform and people very well might want to use special filters, etc...at the moment we just hard-code `torchaudio's` or `librosa's` default filter when doing resampling.\r\n\r\n=> All this to say that we **should definitely** care about whether we rely on local paths or bytes IMO. We don't want to loose all users that are forced to use `datasets` decoding or resampling or have to built a very much not intuitive way of loading bytes into a numpy array. It's much more intuitive to be able to inspect a local file. I feel pretty strongly about this and am happy to also jump on a call. Keeping libraries flexible and lean as well as exposing internals is very important IMO (this philosophy has worked quite well so far with Transformers).\r\n\r\n", "Thanks a lot for the very detailed explanation. Now everything makes much more sense.", "From https://github.com/huggingface/datasets/pull/3736 the Common Voice dataset now gives access to the local audio files as before", "I understand the argument that it is bad to have a breaking change. How to deal with the introduction of breaking changes is a topic of its own and not sure how you want to deal with that (or is the policy this is never allowed, and there must be a `load_dataset_v2` or so if you really want to introduce a breaking change?).\r\n\r\nRegardless of whether it is a breaking change, however, I don't see the other arguments.\r\n\r\n> but **really** bad for Tensorflow and Flax where we **currently cannot** even use `datasets` to load `.mp3` files\r\n\r\nI don't exactly understand this. Why not?\r\n\r\nWhy does the HF dataset on-the-fly decoding mechanism not work? Why is it anyway specific to PyTorch or TensorFlow? Isn't this independent?\r\n\r\nBut even if you just provide the raw bytes to TF, on TF you could just use sth like `tfio.audio.decode_mp3` or `tf.audio.decode_ogg` or `tfio.audio.decode_flac`?\r\n\r\n> There are the following reasons why it is paramount to have access to the original audio file in my opinion ...\r\n\r\nI don't really understand the arguments (despite that it maybe breaks existing code). You anyway have the original audio files but it is just embedded in the dataset? I don't really know about any library which cannot also load the audio from memory (i.e. from the dataset).\r\n\r\nBtw, on librosa being slow for decoding audio files, I saw that as well, so we have this comment RETURNN:\r\n\r\n> Don't use librosa.load which internally uses audioread which would use Gstreamer as a backend which has multiple issues:\r\n> https://github.com/beetbox/audioread/issues/62\r\n> https://github.com/beetbox/audioread/issues/63\r\n> Instead, use PySoundFile (soundfile), which is also faster. See here for discussions:\r\n> https://github.com/beetbox/audioread/issues/64\r\n> https://github.com/librosa/librosa/issues/681\r\n\r\nResampling is also a separate aspect, which is also less straightforward and with different compromises between speed and quality. So there the different tradeoffs and different implementations can make a difference.\r\n\r\nHowever, I don't see how this is related to the question whether there should be the raw bytes inside the dataset or as separate local files.\r\n", "Thanks for your comments here @albertz - cool to get your input! 
\r\n\r\nAnswering a bit here between the lines:\r\n\r\n> I understand the argument that it is bad to have a breaking change. How to deal with the introduction of breaking changes is a topic of its own and not sure how you want to deal with that (or is the policy this is never allowed, and there must be a `load_dataset_v2` or so if you really want to introduce a breaking change?).\r\n> \r\n> Regardless of whether it is a breaking change, however, I don't see the other arguments.\r\n> \r\n> > but **really** bad for Tensorflow and Flax where we **currently cannot** even use `datasets` to load `.mp3` files\r\n> \r\n> I don't exactly understand this. Why not?\r\n\r\n> Why does the HF dataset on-the-fly decoding mechanism not work? Why is it anyway specific to PyTorch or TensorFlow? Isn't this independent?\r\n\r\nThe problem with decoding on the fly is that we currently rely on `torchaudio` for this now which relies on `torch` which is not necessarily something people would like to install when using `tensorflow` or `flax`. Therefore we cannot just rely on people using the decoding on the fly method. We just didn't find a library that is ML framework independent and fast enough for all formats. `torchaudio` is currently in our opinion by far the best here.\r\n\r\nSo for TF and Flax it's important that users can load audio files or bytes they way the want to - this might become less important if we find (or make) a good library with few dependencies that is fast for all kinds of platforms / use cases.\r\n\r\n\r\nNow the question is whether it's better to store audio data as a path to a file or as raw bytes I guess.\\\r\nMy main arguments for storing the audio data as a path to a file is pretty much all about users experience - I don't really expect our users to understand the inner workings of datasets:\r\n\r\n- 1. It's not straightforward to know which function to use to decode it - not all `load_audio(...)` or `read_audio(...)` work on raw bytes. E.g. Looking at https://pytorch.org/audio/stable/torchaudio.html?highlight=load#torchaudio.load one would not see directly how to load raw bytes . There are also some functions of other libraries which only work on files which would require the user to save the bytes as a file first before being able to load it.\r\n- 2. It's difficult to see which format the bytes are coming from (mp3, ogg, ...) - guess this could be remedied by adding the format to each sample though\r\n- 3. It is a bit scary IMO to see raw bytes for users. Overall, I think it's better to leave the data in it's raw form as this way it's much easier for people to play around with the audio files, less need to read docs because people don't worry about what happened to the audio files (are the bytes already resampled?)\r\n\r\nBut the argument that the audio should be loadable directly from memory is good - haven't thought about this too much. \r\nI guess it's still very much possible for the user to do this:\r\n\r\n```python\r\ndef save_as_bytes:\r\n batch[\"bytes\"] = read_in_bytes_from_file(batch[\"file\"])\\\r\n os.remove(batch[\"file\"])\r\n\r\nds = ds.map(save_as_bytes)\r\n\r\nds.save_to_disk(...)\r\n```\r\n\r\nGuess the question is more a bit about what should be the default case?", "> The problem with decoding on the fly is that we currently rely on `torchaudio` for this now which relies on `torch` which is not necessarily something people would like to install when using `tensorflow` or `flax`. Therefore we cannot just rely on people using the decoding on the fly method. 
We just didn't find a library that is ML framework independent and fast enough for all formats. `torchaudio` is currently in our opinion by far the best here.\r\n\r\nBut how is this relevant for this issue here? I thought this issue here is about having the (correct) path in the dataset or having raw bytes in the dataset.\r\n\r\nHow did TF users use it at all then? Or they just do not use on-the-fly decoding? I did not even notice this problem (maybe because I had `torchaudio` installed). But what do they use instead?\r\n\r\nBut as I outlined before, they could just use `tfio.audio.decode_flac` and co, where it would be more natural if you already provide the raw bytes.\r\n\r\n> Looking at https://pytorch.org/audio/stable/torchaudio.html?highlight=load#torchaudio.load one would not see directly how to load raw bytes\r\n\r\nI was not really familiar with `torchaudio`. It seems that they really don't provide an easy/direct API to operate on raw bytes. Which is very strange and unfortunate because as far as I can see, all the underlying backend libraries (e.g. soundfile) easily allow that. So I would say that this is the fault of `torchaudio` then. But despite, if you anyway use `torchaudio` with `soundfile` backend, why not just use `soundfile` directly. It's very simple to use and crossplatform.\r\n\r\nBut ok, now we are just discussing how to handle the on-the-fly decoding. I still think this is a separate issue and having raw bytes in the dataset instead of local files should just be fine as well.\r\n\r\n\r\n> It is a bit scary IMO to see raw bytes for users.\r\n\r\nI think nobody who writes code is scared by seeing the raw bytes content of a binary file. :)\r\n\r\n\r\n> I guess it's still very much possible for the user to do this:\r\n> \r\n> ```python\r\n> def save_as_bytes:\r\n> batch[\"bytes\"] = read_in_bytes_from_file(batch[\"file\"])\\\r\n> os.remove(batch[\"file\"])\r\n> \r\n> ds = ds.map(save_as_bytes)\r\n> \r\n> ds.save_to_disk(...)\r\n> ```\r\n\r\nIn https://github.com/huggingface/datasets/pull/4184#issuecomment-1105191639, you said/proposed that this `map` is not needed anymore and `save_to_disk` could do it automatically (maybe via some option)?\r\n\r\n> Guess the question is more a bit about what should be the default case?\r\n\r\nYea this is up to you. I'm happy as long as we can get it the way we want easily and this is a well supported use case. :)\r\n", "> In https://github.com/huggingface/datasets/pull/4184#issuecomment-1105191639, you said/proposed that this map is not needed anymore and save_to_disk could do it automatically (maybe via some option)?\r\n\r\nYes! Should be super easy now see discussion here: https://github.com/rwth-i6/i6_core/issues/257#issuecomment-1105494468\r\n\r\nThanks for the super useful input :-)", "Despite the comments that this has been fixed, I am finding the exact same problem is occurring again (with datasets version 2.3.2)", "> Despite the comments that this has been fixed, I am finding the exact same problem is occurring again (with datasets version 2.3.2)\r\n\r\nIt appears downgrading to torchaudio 0.11.0 fixed this problem.", "@DCNemesis, sorry which problem exactly is occuring again? Also cc @lhoestq @polinaeterna here", "@patrickvonplaten @lhoestq @polinaeterna I was unable to load audio from Common Voice using 🤗 with the current version of torchaudio, but downgrading to torchaudio 0.11.0 fixed it. 
This is probably more of a torch problem than a Hugging Face problem.", "@DCNemesis that's interesting, could you please share the error message if you still can access it? ", "@polinaeterna I believe it is the same exact error as above. It occurs on other .mp3 sources as well, but the problem is with torchaudio > 0.11.0. I've created a short colab notebook that reproduces the error, and the fix here: https://colab.research.google.com/drive/18wsuwdHwBPN3JkcnhEtk8MUYqF9swuWZ?usp=sharing", "Hi @DCNemesis,\r\n\r\nYour issue was slightly different from the original one in this issue page. Yours seems related to a change in the backend used by `torchaudio` (`ffmpeg` instead of `sox`). Refer to the issue page here:\r\n- #4776\r\n\r\nNormally, it should be circumvented with the patch made by @polinaeterna in:\r\n- #4923", "I think the original issue reported here was already fixed by:\r\n- #3736\r\n\r\nOtherwise, feel free to reopen." ]
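For reference, a runnable version of the `save_as_bytes` idea sketched in the discussion above might look like the following. It is only an illustrative sketch: the `path` column name and the output directory are assumptions, and deleting the cached files is optional and destructive.

```python
import os

from datasets import load_dataset

ds = load_dataset("common_voice", "ab", split="train")


def save_as_bytes(example):
    # Read the original audio file into raw bytes so it can be stored inside the dataset.
    with open(example["path"], "rb") as f:
        example["bytes"] = f.read()
    # Removing the local copy is optional; keep it if you still need the cached file.
    os.remove(example["path"])
    return example


ds = ds.map(save_as_bytes)
ds.save_to_disk("common_voice_ab_with_bytes")
```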
2022-02-01T18:40:10
2022-09-21T15:03:09
2022-09-21T14:56:22
CONTRIBUTOR
null
null
null
null
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results The path should be the complete absolute path to the downloaded audio file not some relative path. ## Actual results ```bash ~/hugging_face/venv_3.9/lib/python3.9/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 150 filepath, frame_offset, num_frames, normalize, channels_first, format) 151 filepath = os.fspath(filepath) --> 152 return torch.ops.torchaudio.sox_io_load_audio_file( 153 filepath, frame_offset, num_frames, normalize, channels_first, format) 154 RuntimeError: Error loading audio file: failed to open file cv-corpus-6.1-2020-12-11/ab/clips/common_voice_ab_19904194.mp3 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3.dev0 - Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.27 - Python version: 3.9.1 - PyArrow version: 3.0.0
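A possible workaround suggested in the comments above is to disable automatic decoding and decode the original MP3 bytes manually. This is only a sketch; it assumes `torchaudio` with a backend that accepts file-like objects, as used elsewhere in this thread:

```python
from io import BytesIO

import torchaudio
from datasets import Audio, load_dataset

ds = load_dataset("common_voice", "ab", split="train")
# Disable automatic decoding so the raw mp3 bytes (and original path) stay accessible
ds = ds.cast_column("audio", Audio(decode=False))

sample = ds[0]["audio"]
# Decode the embedded bytes directly instead of relying on the (relative) path
waveform, sampling_rate = torchaudio.load(BytesIO(sample["bytes"]), format="mp3")
```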
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3663/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
231 days, 20:16:12
https://api.github.com/repos/huggingface/datasets/issues/3662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3662/comments
https://api.github.com/repos/huggingface/datasets/issues/3662/events
https://github.com/huggingface/datasets/issues/3662
1,121,024,403
I_kwDODunzps5C0XmT
3,662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Thanks @lhoestq for finding the reason of incorrect resampling. This issue affects all languages which have sound files with different sampling rates such as Turkish and Luganda.", "@cahya-wirawan - do you know how many languages have different sampling rates in Common Voice? I'm quite surprised to see this for multiple languages actually", "@cahya-wirawan, I can reproduce the problem for Common Voice 7 for Turkish. Here a script you can use:\r\n\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nfrom datasets import load_dataset\r\nimport torchaudio\r\nfrom io import BytesIO\r\nfrom datasets import Audio\r\nfrom collections import Counter\r\nimport sys\r\n\r\nds_name = str(sys.argv[1])\r\nlang = str(sys.argv[2])\r\n\r\nds = load_dataset(ds_name, lang, split=\"train\", use_auth_token=True)\r\nds = ds.cast_column(\"audio\", Audio(decode=False))\r\n\r\nall_sampling_rates = []\r\n\r\n\r\ndef print_sampling_rate(x):\r\n x, sr = torchaudio.load(BytesIO(x[\"audio\"][\"bytes\"]), format=\"mp3\")\r\n all_sampling_rates.append(sr)\r\n\r\nds.map(print_sampling_rate)\r\n\r\n\r\nprint(Counter(all_sampling_rates))\r\n```\r\n\r\ncan be run with:\r\n\r\n```bash\r\npython run.py mozilla-foundation/common_voice_7_0 tr\r\n```\r\n\r\nFor CV 6.1 all samples seem to have the same audio", "It actually shows that many more samples are in 32kHz format than it 48kHz which is unexpected. Thanks a lot for flagging! Will contact Common Voice about this as well", "I only checked the CV 7.0 for Turkish, Luganda and Indonesian, they have audio files with difference sampling rates, and all of them are affected by this issue. Percentage of incorrect resampling as follow, Turkish: 9.1%, Luganda: 88.2% and Indonesian: 64.1%.\r\nI checked it using the original CV files. I check the original sampling rates and the length of audio array of each files and compare it with the length of audio array (and the sampling rate which is always 48kHz) from mozilla-foundation/common_voice_7_0 datasets. if the length of audio array from dataset is not equal to 48kHz/original sampling rate * length of audio array of the original audio file then it is affected,", "Ok wow, thanks a lot for checking this - you've found a pretty big bug :sweat_smile: It seems like **a lot** more datasets are actually affected than I original thought. We'll try to solve this as soon as possible and make an announcement tomorrow." ]
2022-02-01T17:55:04
2022-02-02T10:52:25
2022-02-02T10:52:25
MEMBER
null
null
null
null
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with sampling_rate=32000 !wget https://file-examples-com.github.io/uploads/2017/11/file_example_MP3_700KB.mp3 import torchaudio audio_path = "file_example_MP3_700KB.mp3" audio_path2 = audio_path.replace(".mp3", "_resampled.mp3") resample = torchaudio.transforms.Resample(32000, 16000) # create a new file with sampling_rate=16000 torchaudio.save(audio_path2, resample(torchaudio.load(audio_path)[0]), 16000) ``` Then we can see an issue here when decoding: ```python from datasets import Dataset, Audio dataset = Dataset.from_dict({"audio": [audio_path, audio_path2]}).cast_column("audio", Audio(48000)) dataset[0] # decode the first audio file sets the resampler orig_freq to 32000 print(dataset .features["audio"]._resampler.orig_freq) # 32000 print(dataset[0]["audio"]["array"].shape) # here decoding is fine # (1308096,) dataset = Dataset.from_dict({"audio": [audio_path, audio_path2]}).cast_column("audio", Audio(48000)) dataset[1] # decode the second audio file sets the resampler orig_freq to 16000 print(dataset .features["audio"]._resampler.orig_freq) # 16000 print(dataset[0]["audio"]["array"].shape) # here decoding uses orig_freq=16000 instead of 32000 # (2616192,) ``` The value of `orig_freq` doesn't change no matter what file needs to be decoded cc @patrickvonplaten @anton-l @cahya-wirawan @albertvillanova The issue seems to be here in `Audio.decode_mp3`: https://github.com/huggingface/datasets/blob/4c417d52def6e20359ca16c6723e0a2855e5c3fd/src/datasets/features/audio.py#L176-L180
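Until the bug is fixed, a per-file resampler avoids the stale `orig_freq`: build a new `torchaudio.transforms.Resample` from each file's own sampling rate rather than reusing a cached one. This is only an illustrative sketch reusing the two files created in the snippet above:

```python
import torchaudio

target_sr = 48000
paths = ["file_example_MP3_700KB.mp3", "file_example_MP3_700KB_resampled.mp3"]

for path in paths:
    waveform, orig_sr = torchaudio.load(path)
    # The resampler is constructed from this file's own sampling rate,
    # so a 32 kHz file and a 16 kHz file are both resampled correctly to 48 kHz.
    resampler = torchaudio.transforms.Resample(orig_sr, target_sr)
    resampled = resampler(waveform)
    print(path, orig_sr, "->", target_sr, tuple(resampled.shape))
```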
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3662/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3662/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
16:57:21
https://api.github.com/repos/huggingface/datasets/issues/3659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3659/comments
https://api.github.com/repos/huggingface/datasets/issues/3659/events
https://github.com/huggingface/datasets/issues/3659
1,120,913,672
I_kwDODunzps5Cz8kI
3,659
push_to_hub but preview not working
{ "avatar_url": "https://avatars.githubusercontent.com/u/66082334?v=4", "events_url": "https://api.github.com/users/thomas-happify/events{/privacy}", "followers_url": "https://api.github.com/users/thomas-happify/followers", "following_url": "https://api.github.com/users/thomas-happify/following{/other_user}", "gists_url": "https://api.github.com/users/thomas-happify/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomas-happify", "id": 66082334, "login": "thomas-happify", "node_id": "MDQ6VXNlcjY2MDgyMzM0", "organizations_url": "https://api.github.com/users/thomas-happify/orgs", "received_events_url": "https://api.github.com/users/thomas-happify/received_events", "repos_url": "https://api.github.com/users/thomas-happify/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomas-happify/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomas-happify/subscriptions", "type": "User", "url": "https://api.github.com/users/thomas-happify", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @thomas-happify, please note that the preview may take some time before rendering the data.\r\n\r\nI can see it is already working now.\r\n\r\nI'm closing this issue. Please feel free to reopen it if the problem arises again." ]
2022-02-01T16:23:57
2022-02-09T08:00:37
2022-02-09T08:00:37
NONE
null
null
null
null
## Dataset viewer issue for '*happifyhealth/twitter_pnn*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)* I used ``` dataset.push_to_hub("happifyhealth/twitter_pnn") ``` but the preview is not working. Am I the one who added this dataset ? Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3659/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3659/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
7 days, 15:36:40
https://api.github.com/repos/huggingface/datasets/issues/3658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3658/comments
https://api.github.com/repos/huggingface/datasets/issues/3658/events
https://github.com/huggingface/datasets/issues/3658
1,120,880,395
I_kwDODunzps5Cz0cL
3,658
Dataset viewer issue for *P3*
{ "avatar_url": "https://avatars.githubusercontent.com/u/22351555?v=4", "events_url": "https://api.github.com/users/jeffistyping/events{/privacy}", "followers_url": "https://api.github.com/users/jeffistyping/followers", "following_url": "https://api.github.com/users/jeffistyping/following{/other_user}", "gists_url": "https://api.github.com/users/jeffistyping/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jeffistyping", "id": 22351555, "login": "jeffistyping", "node_id": "MDQ6VXNlcjIyMzUxNTU1", "organizations_url": "https://api.github.com/users/jeffistyping/orgs", "received_events_url": "https://api.github.com/users/jeffistyping/received_events", "repos_url": "https://api.github.com/users/jeffistyping/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jeffistyping/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeffistyping/subscriptions", "type": "User", "url": "https://api.github.com/users/jeffistyping", "user_view_type": "public" }
[]
closed
false
null
[]
[ "The error is now:\r\n\r\n```\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: this dataset is not supported for now.\r\n```\r\n\r\nWe've disabled the dataset viewer for several big datasets like this one. We hope being able to reenable it soon.", "The list of splits cannot be obtained. cc @huggingface/datasets ", "```\r\nError code: SplitsNamesError\r\nException: SplitsNotFoundError\r\nMessage: The split names could not be parsed from the dataset config.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/tmp/modules-cache/datasets_modules/datasets/bigscience--P3/12c0badfecad4564ecb8a6f81b5d0559656f269f08b13c59c93283f3a84134ba/P3.py\", line 154, in _split_generators\r\n data_dir = dl_manager.download_and_extract(_URLs)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 944, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 907, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 393, in map_nested\r\n mapped = [\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 394, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 346, in _single_map_nested\r\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 346, in <dictcomp>\r\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 346, in _single_map_nested\r\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 346, in <dictcomp>\r\n return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 330, in _single_map_nested\r\n return function(data_struct)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 912, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 402, in _get_extraction_protocol\r\n return _get_extraction_protocol_with_magic_number(f)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 367, in _get_extraction_protocol_with_magic_number\r\n magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 574, in read\r\n return super().read(length)\r\n File 
\"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1575, in read\r\n out = self.cache._fetch(self.loc, self.loc + length)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/caching.py\", line 377, in _fetch\r\n self.cache = self.fetcher(start, bend)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 111, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 96, in sync\r\n raise return_result\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 53, in _runner\r\n result[0] = await coro\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 616, in async_fetch_range\r\n out = await r.read()\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py\", line 1036, in read\r\n self._body = await self.content.read()\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/aiohttp/streams.py\", line 375, in read\r\n block = await self.readany()\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/aiohttp/streams.py\", line 397, in readany\r\n await self._wait(\"readany\")\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/aiohttp/streams.py\", line 304, in _wait\r\n await waiter\r\n aiohttp.client_exceptions.ClientPayloadError: Response payload is not completed\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/responses/splits.py\", line 75, in get_splits_response\r\n split_full_names = get_dataset_split_full_names(dataset, hf_token)\r\n File \"/src/services/worker/src/worker/responses/splits.py\", line 35, in get_dataset_split_full_names\r\n return [\r\n File \"/src/services/worker/src/worker/responses/splits.py\", line 38, in <listcomp>\r\n for split in get_dataset_split_names(dataset, config, use_auth_token=hf_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\n datasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```", "Closing in favor of https://huggingface.co/datasets/bigscience/P3/discussions/6 and https://github.com/huggingface/datasets-server/issues/1689" ]
2022-02-01T15:57:56
2023-09-25T12:16:21
2023-09-25T12:16:21
NONE
null
null
null
null
## Dataset viewer issue for '*P3*' **Link: https://huggingface.co/datasets/bigscience/P3** ``` Status code: 400 Exception: SplitsNotFoundError Message: The split names could not be parsed from the dataset config. ``` Am I the one who added this dataset ? No
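To reproduce the failing splits check outside the viewer, the same inspection call that appears in the traceback above can be run locally. This is only a sketch; resolving the configs of a dataset this large can take a while:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

configs = get_dataset_config_names("bigscience/P3")
# The viewer fails at this step, while parsing the split names for each config
print(get_dataset_split_names("bigscience/P3", configs[0]))
```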
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3658/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
600 days, 20:18:25
https://api.github.com/repos/huggingface/datasets/issues/3656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3656/comments
https://api.github.com/repos/huggingface/datasets/issues/3656/events
https://github.com/huggingface/datasets/issues/3656
1,120,510,823
I_kwDODunzps5CyaNn
3,656
checksum error subjqa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/9828683?v=4", "events_url": "https://api.github.com/users/RensDimmendaal/events{/privacy}", "followers_url": "https://api.github.com/users/RensDimmendaal/followers", "following_url": "https://api.github.com/users/RensDimmendaal/following{/other_user}", "gists_url": "https://api.github.com/users/RensDimmendaal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RensDimmendaal", "id": 9828683, "login": "RensDimmendaal", "node_id": "MDQ6VXNlcjk4Mjg2ODM=", "organizations_url": "https://api.github.com/users/RensDimmendaal/orgs", "received_events_url": "https://api.github.com/users/RensDimmendaal/received_events", "repos_url": "https://api.github.com/users/RensDimmendaal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RensDimmendaal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RensDimmendaal/subscriptions", "type": "User", "url": "https://api.github.com/users/RensDimmendaal", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @RensDimmendaal, \r\n\r\nI'm sorry but I can't reproduce your bug:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"subjqa\", \"electronics\")\r\nDownloading builder script: 9.15kB [00:00, 4.10MB/s] \r\nDownloading metadata: 17.7kB [00:00, 8.51MB/s] \r\nDownloading and preparing dataset subjqa/electronics (download: 10.86 MiB, generated: 3.01 MiB, post-processed: Unknown size, total: 13.86 MiB) to .../.cache/huggingface/datasets/subjqa/electronics/1.1.0/e5588f9298ff2d70686a00cc377e4bdccf4e32287459e3c6baf2dc5ab57fe7fd...\r\nDownloading data: 11.4MB [00:03, 3.50MB/s]\r\nDataset subjqa downloaded and prepared to .../.cache/huggingface/datasets/subjqa/electronics/1.1.0/e5588f9298ff2d70686a00cc377e4bdccf4e32287459e3c6baf2dc5ab57fe7fd. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 605.09it/s]\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['domain', 'nn_mod', 'nn_asp', 'query_mod', 'query_asp', 'q_reviews_id', 'question_subj_level', 'ques_subj_score', 'is_ques_subjective', 'review_id', 'id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 1295\r\n })\r\n test: Dataset({\r\n features: ['domain', 'nn_mod', 'nn_asp', 'query_mod', 'query_asp', 'q_reviews_id', 'question_subj_level', 'ques_subj_score', 'is_ques_subjective', 'review_id', 'id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 358\r\n })\r\n validation: Dataset({\r\n features: ['domain', 'nn_mod', 'nn_asp', 'query_mod', 'query_asp', 'q_reviews_id', 'question_subj_level', 'ques_subj_score', 'is_ques_subjective', 'review_id', 'id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 255\r\n })\r\n})\r\n```\r\n\r\nCould you please try again and see if the problem persists?\r\n\r\nIf that is the case, you can circumvent the issue by passing `ignore_verifications`:\r\n```python\r\nds = load_dataset(\"subjqa\", \"electronics\", ignore_verifications=True)", "Thanks checking!\r\n\r\nYou're totally right. I don't know what's changed, but I'm glad it's working now!\r\n\r\n" ]
2022-02-01T10:53:33
2022-02-10T10:56:59
2022-02-10T10:56:38
NONE
null
null
null
null
## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ## Actual results ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-d2857d460155> in <module>() 2 from datasets import load_dataset 3 ----> 4 subjqa = load_dataset("subjqa","electronics") 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/lewtun/SubjQA/archive/refs/heads/master.zip'] ``` ## Environment info Google colab - `datasets` version: 1.18.2 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/9828683?v=4", "events_url": "https://api.github.com/users/RensDimmendaal/events{/privacy}", "followers_url": "https://api.github.com/users/RensDimmendaal/followers", "following_url": "https://api.github.com/users/RensDimmendaal/following{/other_user}", "gists_url": "https://api.github.com/users/RensDimmendaal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RensDimmendaal", "id": 9828683, "login": "RensDimmendaal", "node_id": "MDQ6VXNlcjk4Mjg2ODM=", "organizations_url": "https://api.github.com/users/RensDimmendaal/orgs", "received_events_url": "https://api.github.com/users/RensDimmendaal/received_events", "repos_url": "https://api.github.com/users/RensDimmendaal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RensDimmendaal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RensDimmendaal/subscriptions", "type": "User", "url": "https://api.github.com/users/RensDimmendaal", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3656/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3656/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
9 days, 0:03:05
https://api.github.com/repos/huggingface/datasets/issues/3655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3655/comments
https://api.github.com/repos/huggingface/datasets/issues/3655/events
https://github.com/huggingface/datasets/issues/3655
1,119,801,077
I_kwDODunzps5Cvs71
3,655
Pubmed dataset not reachable
{ "avatar_url": "https://avatars.githubusercontent.com/u/77638579?v=4", "events_url": "https://api.github.com/users/abhi-mosaic/events{/privacy}", "followers_url": "https://api.github.com/users/abhi-mosaic/followers", "following_url": "https://api.github.com/users/abhi-mosaic/following{/other_user}", "gists_url": "https://api.github.com/users/abhi-mosaic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhi-mosaic", "id": 77638579, "login": "abhi-mosaic", "node_id": "MDQ6VXNlcjc3NjM4NTc5", "organizations_url": "https://api.github.com/users/abhi-mosaic/orgs", "received_events_url": "https://api.github.com/users/abhi-mosaic/received_events", "repos_url": "https://api.github.com/users/abhi-mosaic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhi-mosaic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhi-mosaic/subscriptions", "type": "User", "url": "https://api.github.com/users/abhi-mosaic", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @abhi-mosaic, thanks for reporting.\r\n\r\nI'm looking at it... ", "also hitting this issue", "Hey @albertvillanova, sorry to reopen this... I can confirm that on `master` branch the dataset is downloadable now but it is still broken in streaming mode:\r\n\r\n```python\r\n >>> import datasets\r\n >>> pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True)\r\n >>> next(iter(pubmed_train))\r\n```\r\n```\r\n No such file or directory: 'gzip://pubmed22n0001.xml::ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n0001.xml.gz'\r\n```\r\n", "Hi @abhi-mosaic, would you mind opening another issue for this new problem?\r\n\r\nFirst issue (already solved) was a ConnectionError due to the yearly update release of PubMed: we fixed it by updating the URLs from year 2021 to year 2022.\r\n\r\nHowever this is another problem: to make pubmed streamable. Please note that NOT all our datastes are streamable: we are making streamable more and more of them... but this is an on-going process...\r\n\r\nThanks.", "@albertvillanova \r\nWhen I tried below codes, I got the similar error\r\n\r\n```\r\n\r\ndataset=load_dataset(\"pubmed\",split=\"train\")\r\n\r\nCouldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0601.xml.gz\r\n```", "@y-rok you need to update `datasets`:\r\n```shell\r\npip install -U datasets\r\n```" ]
2022-01-31T18:45:47
2022-12-19T19:18:10
2022-02-14T14:15:41
CONTRIBUTOR
null
null
null
null
## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionError: Couldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz (InvalidSchema("No connection adapters were found for 'ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz'")) ``` ## Environment info - `datasets` version: 1.18.2 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.2 - PyArrow version: 6.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3655/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3655/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
13 days, 19:29:54
https://api.github.com/repos/huggingface/datasets/issues/3653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3653/comments
https://api.github.com/repos/huggingface/datasets/issues/3653/events
https://github.com/huggingface/datasets/issues/3653
1,119,186,952
I_kwDODunzps5CtXAI
3,653
`to_json` in multiprocessing fashion sometimes deadlock
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[]
2022-01-31T09:35:07
2022-01-31T09:35:07
null
CONTRIBUTOR
null
null
null
null
## Describe the bug `to_json` in multiprocessing fashion sometimes deadlocks instead of raising exceptions. A temporary workaround is to notice that it deadlocks and then reduce the number of processes or the batch size in order to reduce the memory footprint. As @lhoestq pointed out, this might be related to https://bugs.python.org/issue22393#msg315684 where `multiprocessing` fails to raise the OOM exception. One suggested alternative is to use `concurrent.futures` instead. ## Steps to reproduce the bug ## Expected results Script fails when one worker hits OOM and raises an appropriate error. ## Actual results Deadlock ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.1 - Platform: Linux - Python version: 3.8 - PyArrow version: 6.0.1
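To illustrate the `concurrent.futures` alternative mentioned above, one possible sketch exports JSON shards from separate worker processes so that an OOM-killed worker surfaces as an exception rather than a silent hang. The dataset name, shard count, and output paths below are placeholders, not part of the original report:

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

from datasets import load_dataset

NUM_SHARDS = 8


def export_shard(shard_index: int) -> str:
    # Re-loading inside the worker reuses the local cache (Arrow files are memory-mapped),
    # so the data is not duplicated in memory.
    ds = load_dataset("some_dataset", split="train")  # placeholder dataset name
    shard = ds.shard(num_shards=NUM_SHARDS, index=shard_index, contiguous=True)
    path = f"shard-{shard_index:05d}.jsonl"
    shard.to_json(path)
    return path


if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(export_shard, i) for i in range(NUM_SHARDS)]
        for future in as_completed(futures):
            # result() re-raises worker exceptions, and an abruptly killed worker
            # shows up as BrokenProcessPool instead of a silent deadlock.
            print("wrote", future.result())
```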
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3653/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3649/comments
https://api.github.com/repos/huggingface/datasets/issues/3649/events
https://github.com/huggingface/datasets/issues/3649
1,117,502,250
I_kwDODunzps5Cm7sq
3,649
Add IGLUE dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "19E633", "default": false, "description": "Multimodal datasets", "id": 3608944167, "name": "multimodal", "node_id": "LA_kwDODunzps7XHB4n", "url": "https://api.github.com/repos/huggingface/datasets/labels/multimodal" } ]
open
false
null
[]
[]
2022-01-28T14:59:41
2022-01-28T15:02:35
null
MEMBER
null
null
null
null
## Adding a Dataset - **Name:** IGLUE - **Description:** IGLUE brings together 4 vision-and-language tasks across 20 languages (Twitter [thread](https://twitter.com/ebugliarello/status/1487045497583976455?s=20&t=SB4LZGDhhkUW83ugcX_m5w)) - **Paper:** https://arxiv.org/abs/2201.11732 - **Data:** https://github.com/e-bug/iglue - **Motivation:** This dataset would provide a nice example of combining the text and image features of `datasets` together for multimodal applications. Note: the data / code are not yet visible on the GitHub repo, so I've pinged the authors for more information. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3649/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3645/comments
https://api.github.com/repos/huggingface/datasets/issues/3645/events
https://github.com/huggingface/datasets/issues/3645
1,116,541,298
I_kwDODunzps5CjRFy
3,645
Streaming datasets based on dl_manager.iter_archive/iter_files are not reset correctly
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[]
2022-01-27T17:17:41
2022-01-28T16:34:28
2022-01-28T16:34:28
MEMBER
null
null
null
null
Hi ! When iterating over a streaming dataset once, it's not reset correctly because of some issues with `dl_manager.iter_archive` and `dl_manager.iter_files`. Indeed they are generator functions (so the iterator that is returned can be exhausted). They should be iterables instead, and be reset if we do a for loop again: ```python from datasets import load_dataset d = load_dataset("common_voice", "ab", split="test", streaming=True) i = 0 for i, _ in enumerate(d): pass print(i) # 8 # let's do it again i = 0 for i, _ in enumerate(d): pass print(i) # 0 ```
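A minimal sketch (hypothetical names, not the actual `dl_manager` code) of the distinction being described: the object returned by a generator function is exhausted after one pass, whereas an object whose `__iter__` creates a fresh generator restarts on every `for` loop.

```python
# Sketch only: generator function vs. re-iterable wrapper.
def iter_files_gen(files):
    for f in files:
        yield f  # the returned generator can be consumed only once

class IterFiles:
    def __init__(self, files):
        self.files = files
    def __iter__(self):
        # a new generator is created each time, so iteration can be repeated
        yield from self.files

gen = iter_files_gen(["a.txt", "b.txt"])
print(sum(1 for _ in gen), sum(1 for _ in gen))    # 2 0 -> exhausted after the first loop

files = IterFiles(["a.txt", "b.txt"])
print(sum(1 for _ in files), sum(1 for _ in files))  # 2 2 -> resets on every loop
```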
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3645/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3645/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
23:16:47
https://api.github.com/repos/huggingface/datasets/issues/3644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3644/comments
https://api.github.com/repos/huggingface/datasets/issues/3644/events
https://github.com/huggingface/datasets/issues/3644
1,116,519,670
I_kwDODunzps5CjLz2
3,644
Add a GROUP BY operator
{ "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/felix-schneider", "id": 208336, "login": "felix-schneider", "node_id": "MDQ6VXNlcjIwODMzNg==", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "repos_url": "https://api.github.com/users/felix-schneider/repos", "site_admin": false, "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "type": "User", "url": "https://api.github.com/users/felix-schneider", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Hi ! At the moment you can use `to_pandas()` to get a pandas DataFrame that supports `group_by` operations (make sure your dataset fits in memory though)\r\n\r\nWe use Arrow as a back-end for `datasets` and it doesn't have native group by (see https://github.com/apache/arrow/issues/2189) unfortunately.\r\n\r\nI just drafted what it could look like to have `group_by` in `datasets`:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n\r\ndef group_by(d, col, join): \r\n \"\"\"from: https://github.com/huggingface/datasets/issues/3644\"\"\"\r\n # Get the indices of each group\r\n groups = {key: [] for key in d.unique(col)} \r\n def create_groups_indices(key, i): \r\n groups[key].append(i) \r\n d.map(create_groups_indices, with_indices=True, input_columns=col) \r\n # Get one dataset object per group\r\n groups = {key: d.select(indices) for key, indices in groups.items()} \r\n # Apply join function\r\n groups = {\r\n key: dataset_group.map(join, batched=True, batch_size=len(dataset_group), remove_columns=d.column_names)\r\n for key, dataset_group in groups.items()\r\n } \r\n # Return concatenation of all the joined groups\r\n return concatenate_datasets(groups.values())\r\n```\r\n\r\nexample of usage:\r\n```python\r\n\r\ndef join(batch): \r\n # take the batch of all the examples of a group, and return a batch with one aggregated example\r\n # (we could aggregate examples into several rows instead of one, if you want)\r\n return {\"total\": [batch[\"i\"]]} \r\n\r\nd = Dataset.from_dict({\r\n \"i\": [i for i in range(50)],\r\n \"group_key\": [i % 4 for i in range(50)],\r\n})\r\nprint(group_by(d, \"group_key\", join))\r\n# total\r\n# 0 [0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48]\r\n# 1 [1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49]\r\n# 2 [2, 6, 10, 14, 18, 22, 26, 30, 34, 38, 42, 46]\r\n# 3 [3, 7, 11, 15, 19, 23, 27, 31, 35, 39, 43, 47]\r\n```\r\n\r\nLet me know if that helps !\r\n\r\ncc @albertvillanova @mariosasko for visibility", "@lhoestq As of PyArrow 7.0.0, `pa.Table` has the [`group_by` method](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.group_by), so we should also consider using that function for grouping. ", "Any update on this?", "You can use https://github.com/mariosasko/datasets_sql by @mariosasko to go group by operations using SQL queries", "Hi, I have a similar issue as OP but the suggested solutions do not work for my case. Basically, I process documents through a model to extract the last_hidden_state, using the \"map\" method on a Dataset object, but would like to average the result over a categorical column at the end (i.e. groupby this column).\r\n- A to_pandas() saturates the memory, although it gives me the desired result through a .groupby().apply(np.mean, axis=0) on a smaller use-case,\r\n- The solution posted on Feb 4 is much too slow,\r\n- datasets_sql seems to not like the fact that I'm averaging np.arrays.\r\nSo I'm kinda out of \"non brute force\" options... Any help appreciated", "> Hi, I have a similar issue as OP but the suggested solutions do not work for my case. Basically, I process documents through a model to extract the last_hidden_state, using the \"map\" method on a Dataset object, but would like to average the result over a categorical column at the end (i.e. groupby this column).\r\n \r\nIf you haven't yet, you could explore using [Polars](https://www.pola.rs/) for this. It's a new DataFrame library written in Rust with Python bindings. 
It is Pandas like it in many ways ,but does have some biggish differences in syntax/approach so it's definitely not a drop-in replacement. \r\n\r\nPolar's also uses Arrow as a backend but also supports out-of-memory operations; in this case, it's probably easiest to write out your dataset to parquet and then use the polar's `scan_parquet` method (this will lazily read from the parquet file). The thing you get back from that is a `LazyDataFrame` i.e. nothing is loaded into memory until you specify a query and call a `collect` method. \r\n\r\nExample below of doing a groupby on a dataset which definitely wouldn't fit into memory on my machine:\r\n\r\n```\r\nfrom datasets import load_dataset\r\nimport polars as pl\r\n\r\nds = load_dataset(\"blbooks\")\r\nds['train'].to_parquet(\"test.parquet\")\r\ndf = pl.scan_parquet(\"test.parquet\")\r\ndf.groupby('date').agg([pl.count()]).collect()\r\n```\r\n\r\n>datasets_sql seems to not like the fact that I'm averaging np.arrays.\r\n\r\nI am not certain how Polars will handle this either. It does have NumPy support (https://pola-rs.github.io/polars-book/user-guide/howcani/interop/numpy.html) but I assume Polars will need to have at least enough memory in each group you want to average over so you may still end up needing more memory depending on the size of your dataset/groups. \r\n\r\n\r\n", "Hi @davanstrien , thanks a lot, I didn't know about this library and the answer works! I need to try it on the full dataset now, but I'm hopeful. Here's what my code looks like:\r\n```\r\nlist_size = 768\r\ndf.groupby(\"date\").agg(\r\n pl.concat_list(\r\n [\r\n pl.col(\"hidden_state\")\r\n .arr.slice(n, 1)\r\n .arr.first()\r\n .mean()\r\n for n in range(0, list_size)\r\n ]\r\n ).collect()\r\n```\r\n\r\nFor some reasons, the following code was giving me a \"mean() got unexpected argument 'axis'\":\r\n```\r\ndf2 = df.groupby('date').agg(\r\n pl.col(\"hidden_state\").map(np.mean).alias(\"average_hidden_state\")\r\n).collect()\r\n\r\n```\r\n\r\nEDIT: The solution works on my large dataset, the memory does not crash and the time is reasonable, thanks a lot again!", "@jeremylhour glad this worked for you :) ", "I find this functionality missing in my workflow as well and the workarounds with SQL and Polars unsatisfying. Since PyArrow has exposed this functionality, I hope this soon makes it into a release. (:", "Any update on this feature? ", "We added a proper Polars integration at #3334 if it can help:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> ds = load_dataset(\"TheBritishLibrary/blbooks\", \"1700_1799\", split=\"train\")\r\n>>> ds.to_polars().groupby('date').len()\r\n┌─────────────────────┬──────┐\r\n│ date ┆ len │\r\n│ --- ┆ --- │\r\n│ datetime[ms] ┆ u32 │\r\n╞═════════════════════╪══════╡\r\n│ 1796-01-01 00:00:00 ┆ 5831 │\r\n│ 1775-01-01 00:00:00 ┆ 4697 │\r\n│ 1749-01-01 00:00:00 ┆ 1118 │\r\n│ 1740-01-01 00:00:00 ┆ 713 │\r\n│ 1714-01-01 00:00:00 ┆ 865 │\r\n│ … ┆ … │\r\n│ 1795-01-01 00:00:00 ┆ 5930 │\r\n│ 1754-01-01 00:00:00 ┆ 1373 │\r\n│ 1780-01-01 00:00:00 ┆ 1970 │\r\n│ 1734-01-01 00:00:00 ┆ 1047 │\r\n│ 1719-01-01 00:00:00 ┆ 1235 │\r\n└─────────────────────┴──────┘\r\n```\r\n", "Umm... did any responses GET REQUESTS? I cannot understand why 'integrations' are mentioned.", "@lhoestq so does `to_polars` work with memory mapping? Because to_pandas doesn't, does it?", "According to the [polars docs](https://docs.pola.rs/api/python/dev/reference/api/polars.from_arrow.html):\n\n> This operation will be zero copy for the most part. 
Types that are not supported by Polars may be cast to the closest supported type.\n\nwhich means that for the most part the memory mapped data is not copied, so yes it works with memory mapping :)" ]
2022-01-27T16:57:54
2025-01-28T11:39:48
null
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datasets.Value("string") # } ds = datasets.Dataset() def split(examples): sentences = [text.split(".") for text in examples["text"]] return { "example_id": [ example_id for example_id, sents in zip(examples["example_id"], sentences) for _ in sents ], "sentence": [sent for sents in sentences for sent in sents], "sentence_id": [i for sents in sentences for i in range(len(sents))], } split_ds = ds.map(split, batched=True) def process(examples): outputs = some_neural_network_that_works_on_sentences(examples["sentence"]) return {"outputs": outputs} split_ds = split_ds.map(process, batched=True) ``` I have a dataset consisting of texts that I would like to process sentence by sentence in a batched way. Afterwards, I would like to put it back together as it was, merging the outputs together. **Describe the solution you'd like** Ideally, it would look something like this: ```python def join(examples): order = np.argsort(examples["sentence_id"]) text = ".".join(examples["text"][i] for i in order) outputs = [examples["outputs"][i] for i in order] return {"text": text, "outputs": outputs} ds = split_ds.group_by("example_id", join) ``` **Describe alternatives you've considered** Right now, we can do this: ```python def merge(example): meeting_id = example["example_id"] parts = split_ds.filter(lambda x: x["example_id"] == meeting_id).sort("segment_no") return {"outputs": list(parts["outputs"])} ds = ds.map(merge) ``` Of course, we could process the dataset like this: ```python def process(example): outputs = some_neural_network_that_works_on_sentences(example["text"].split(".")) return {"outputs": outputs} ds = ds.map(process, batched=True) ``` However, that does not allow using an arbitrary batch size and may lead to very inefficient use of resources if the batch size is much larger than the number of sentences in one example. I would very much appreciate some kind of group by operator to merge examples based on the value of one column.
null
{ "+1": 8, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 8, "url": "https://api.github.com/repos/huggingface/datasets/issues/3644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3644/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3640/comments
https://api.github.com/repos/huggingface/datasets/issues/3640/events
https://github.com/huggingface/datasets/issues/3640
1,116,133,769
I_kwDODunzps5ChtmJ
3,640
Issues with custom dataset in Wav2Vec2
{ "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/peregilk", "id": 9079808, "login": "peregilk", "node_id": "MDQ6VXNlcjkwNzk4MDg=", "organizations_url": "https://api.github.com/users/peregilk/orgs", "received_events_url": "https://api.github.com/users/peregilk/received_events", "repos_url": "https://api.github.com/users/peregilk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "type": "User", "url": "https://api.github.com/users/peregilk", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Closed and moved to transformers." ]
2022-01-27T12:09:05
2022-01-27T12:29:48
2022-01-27T12:29:48
NONE
null
null
null
null
We are training Wav2Vec2 using the run_speech_recognition_ctc_bnb.py script. This is working fine with Common Voice; however, using our custom dataset and data loader at [NbAiLab/NPSC]( https://huggingface.co/datasets/NbAiLab/NPSC) it crashes after roughly 1 epoch with the following stack trace: ![image](https://user-images.githubusercontent.com/9079808/151355893-6d5887cc-ca19-4b12-948a-124eb6dac372.png) We are able to work around the issue, for instance by adding this check at line #222 in transformers/models/wav2vec2/modeling_wav2vec2.py: ```python if input_length - (mask_length - 1) < num_masked_span: num_masked_span = input_length - (mask_length - 1) ``` Interestingly, these are the variable values before the adjustment: ``` input_length=10 mask_length=10 num_masked_span=2 ``` After adjusting num_masked_span to 1, the training script runs. The issue is also fixed by setting “replace=True” in the same function. Do you have any idea what is causing this, and how to fix this error permanently? If you do not think this is a Datasets issue, feel free to move the issue.
{ "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/peregilk", "id": 9079808, "login": "peregilk", "node_id": "MDQ6VXNlcjkwNzk4MDg=", "organizations_url": "https://api.github.com/users/peregilk/orgs", "received_events_url": "https://api.github.com/users/peregilk/received_events", "repos_url": "https://api.github.com/users/peregilk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "type": "User", "url": "https://api.github.com/users/peregilk", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3640/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3640/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:20:43
https://api.github.com/repos/huggingface/datasets/issues/3639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3639/comments
https://api.github.com/repos/huggingface/datasets/issues/3639/events
https://github.com/huggingface/datasets/issues/3639
1,116,021,420
I_kwDODunzps5ChSKs
3,639
Same value of precision, recall, and F1 score at each epoch for a classification task.
{ "avatar_url": "https://avatars.githubusercontent.com/u/10828657?v=4", "events_url": "https://api.github.com/users/Dhanachandra/events{/privacy}", "followers_url": "https://api.github.com/users/Dhanachandra/followers", "following_url": "https://api.github.com/users/Dhanachandra/following{/other_user}", "gists_url": "https://api.github.com/users/Dhanachandra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dhanachandra", "id": 10828657, "login": "Dhanachandra", "node_id": "MDQ6VXNlcjEwODI4NjU3", "organizations_url": "https://api.github.com/users/Dhanachandra/orgs", "received_events_url": "https://api.github.com/users/Dhanachandra/received_events", "repos_url": "https://api.github.com/users/Dhanachandra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dhanachandra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dhanachandra/subscriptions", "type": "User", "url": "https://api.github.com/users/Dhanachandra", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @Dhanachandra, \r\n\r\nWe have tests for all our metrics and they work as expected: under the hood, we use scikit-learn implementations.\r\n\r\nMaybe the cause is somewhere else. For example:\r\n- Is it a binary or a multiclass or a multilabel classification? Default computation of these metrics is for binary classification; if you would like multiclass or multilabel, you should pass the corresponding parameters; see their documentation (e.g.: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) or code below:\r\n\r\nhttps://huggingface.co/docs/datasets/using_metrics.html#computing-the-metric-scores\r\n\r\n```python\r\nIn [1]: from datasets import load_metric\r\n\r\nIn [2]: precision = load_metric(\"precision\")\r\n\r\nIn [3]: print(precision.inputs_description)\r\n\r\nArgs:\r\n predictions: Predicted labels, as returned by a model.\r\n references: Ground truth labels.\r\n labels: The set of labels to include when average != 'binary', and\r\n their order if average is None. Labels present in the data can\r\n be excluded, for example to calculate a multiclass average ignoring\r\n a majority negative class, while labels not present in the data will\r\n result in 0 components in a macro average. For multilabel targets,\r\n labels are column indices. By default, all labels in y_true and\r\n y_pred are used in sorted order.\r\n average: This parameter is required for multiclass/multilabel targets.\r\n If None, the scores for each class are returned. Otherwise, this\r\n determines the type of averaging performed on the data:\r\n binary: Only report results for the class specified by pos_label.\r\n This is applicable only if targets (y_{true,pred}) are binary.\r\n micro: Calculate metrics globally by counting the total true positives,\r\n false negatives and false positives.\r\n macro: Calculate metrics for each label, and find their unweighted mean.\r\n This does not take label imbalance into account.\r\n weighted: Calculate metrics for each label, and find their average\r\n weighted by support (the number of true instances for each label).\r\n This alters ‘macro’ to account for label imbalance; it can result\r\n in an F-score that is not between precision and recall.\r\n samples: Calculate metrics for each instance, and find their average\r\n (only meaningful for multilabel classification).\r\n sample_weight: Sample weights.\r\n\r\nReturns:\r\n precision: Precision score.\r\n\r\nExamples:\r\n\r\n >>> precision_metric = datasets.load_metric(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1], predictions=[0, 1])\r\n >>> print(results)\r\n {'precision': 1.0}\r\n\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')\r\n >>> print(results)\r\n {'precision': 0.3333333333333333}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print(results)\r\n {'precision': array([0.66666667, 0. , 0. ])}\r\n```\r\n" ]
2022-01-27T10:14:16
2022-02-24T09:02:18
2022-02-24T09:02:17
NONE
null
null
null
null
**1st Epoch:** 1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s] 01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:30:49 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow PRECISION: {'precision': 0.7612903225806451} RECALL: {'recall': 0.7612903225806451} F1: {'f1': 0.7612903225806451} {'eval_loss': 1.4658324718475342, 'eval_accuracy': 0.7612903118133545, 'eval_runtime': 30.0054, 'eval_samples_per_second': 46.492, 'eval_steps_per_second': 46.492, 'epoch': 3.0} **4th Epoch:** 1/27/2022 09:56:55 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.92it/s] 01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow PRECISION: {'precision': 0.7698924731182796} RECALL: {'recall': 0.7698924731182796} F1: {'f1': 0.7698924731182796} ## Environment info !git clone https://github.com/huggingface/transformers %cd transformers !pip install . !pip install -r /content/transformers/examples/pytorch/token-classification/requirements.txt !pip install datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3639/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3639/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
27 days, 22:48:01
https://api.github.com/repos/huggingface/datasets/issues/3638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3638/comments
https://api.github.com/repos/huggingface/datasets/issues/3638/events
https://github.com/huggingface/datasets/issues/3638
1,115,725,703
I_kwDODunzps5CgJ-H
3,638
AutoTokenizer hash value got changed after datasets.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tshu-w", "id": 13161779, "login": "tshu-w", "node_id": "MDQ6VXNlcjEzMTYxNzc5", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "repos_url": "https://api.github.com/users/tshu-w/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "type": "User", "url": "https://api.github.com/users/tshu-w", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "This issue was original reported at https://github.com/huggingface/transformers/issues/14931 and It seems like this issue also occur with other AutoClass like AutoFeatureExtractor.", "Thanks for moving the issue here !\r\n\r\nI wasn't able to reproduce the issue on my env (the hashes stay the same):\r\n```\r\n- `transformers` version: 1.15.0\r\n- `tokenizers` version: 0.10.3\r\n- `datasets` version: 1.18.1\r\n- `dill` version: 0.3.4\r\n- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11\r\n- Python version: 3.7.10\r\n- PyArrow version: 6.0.1\r\n```\r\nHowever I was able to reproduce it on Google Colab (the hashes end up different):\r\n```\r\n- `transformers` version: 1.15.0\r\n- `tokenizers` version: 0.10.3\r\n- `datasets` version: 1.18.1\r\n- `dill` version: 0.3.4\r\n- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.12\r\n- PyArrow version: 3.0.0\r\n```\r\nI'll investigate why it doesn't work properly on Google Colab :)", "I found the issue: the tokenizer has something inside it that changes.\r\n\r\nBefore the call, `tokenizer._tokenizer.truncation` is None, and after the call it changes to this for some reason:\r\n```\r\n{'max_length': 512, 'strategy': 'longest_first', 'stride': 0}\r\n```\r\n\r\nDoes anybody know why calling the tokenizer would change its state this way ? cc @Narsil @SaulLu maybe ?", "`tokenizer.encode(..)` does not accept argument like max_length, strategy or stride.\r\n\r\nIn `tokenizers` you have to modify the tokenizer state by setting various `TruncationParams` (and/or `PaddingParams`).\r\nHowever, since this is modifying the state, you need to mutably borrow the tokenizer (a rust concept). The key principle is that there can ever be only 1 mutable borrow at a time during the span of the tokenizer lifecycle.\r\n\r\nBecause of this, if `transformers` blindly set `TruncationParams` and `PaddingParams` on every call, it would cause the tokenizer to crash (or make the various threads accessing it hang, which is not necessarily better).\r\n\r\nIn order to avoid that, we decided to handle it this way : https://github.com/huggingface/transformers/pull/12550 . 
\r\n\r\nWhich should explain the state of the tokenizer being modified (hence its hash).\r\n\r\nNow for a temporary solution, simply encoding once with the tokenizer should give it it's proper hash (since by default the tokenizer doesn't have this state, looks at the first encoding call, and creates it).\r\n\r\nWe could try and set these 2 dicts at initialization time, but it wouldn't work if a user modified the tokenizer state later\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(..)\r\ntokenizer.truncation_side = \"left\"\r\n# Now we have a difference between `tokenizer._tokenizer.truncation` and `tokenizer.truncation_side`\r\n```\r\nIf we wanted to fix it correctly it would mean mapping every assignation to it's proper location on `tokenizer.{padding/truncation}`\r\n\r\nI think it's important to note that we cannot guarantee a tokenizer' hash remains the same if *any* of those parameters are modified through the `.map` function.\r\n\r\nEdit: Another option would be to override the default __hash__ function, but I don't know if there's a sound implementation that could fit.", "Thanks a lot for the explanation !\r\nI think if we set these 2 dicts at initialization time it would be amazing already\r\n\r\nShall we open an issue in `transformers` to ask for these dictionaries to be set when the tokenizer is instantiated ?\r\n\r\n> Edit: Another option would be to override the default hash function, but I don't know if there's a sound implementation that could fit.\r\n\r\nIn `datasets` we can easily have custom hashing for objects of the other HF libraries if we want. For example we ignore the cache some tokenizers have. However in this specific case it touches parameters that may change the behavior of the tokenizer itself. I'm not sure the logic that determines how a tokenizer behaves should be in `datasets`", "A hack we could have in the `datasets` lib would be to call the tokenizer before hashing it in order to set all its parameters correctly - but it sounds a lot like a hack and I'm not sure this can work in the long run", "Fully agree with everything you said. \r\n\r\nI think the best course of action is creating an issue in `transformers`. I can start the work on this.\r\nI think the code changes are fairly simple. Making a sound test + not breaking other stuff might be different :D", "It should be noted that this problem also occurs in other AutoClasses, such as AutoFeatureExtractor, so I don't think handling it in Datasets is a long-term practice either.", "> I think the best course of action is creating an issue in `transformers`. I can start the work on this.\r\n\r\n@Narsil Hi, I reopen this issue in `transformers` https://github.com/huggingface/transformers/issues/14931", "Here is @Narsil comment from https://github.com/huggingface/transformers/issues/14931#issuecomment-1074981569\r\n> # TL;DR\r\n> Call the function once on a dummy example beforehand will fix it.\r\n> \r\n> ```python\r\n> tokenizer(\"Some\", \"test\", truncation=True)\r\n> ```\r\n> \r\n> # Long answer\r\n> If I remember the last status, it's hard doing anything, since the call itself\r\n> \r\n> ```python\r\n> tokenizer(example[\"sentence1\"], example[\"sentence2\"], truncation=True)\r\n> ```\r\n> \r\n> will modify the tokenizer. It's the `truncation=True` that modifies the tokenizer to put it into truncation mode if you will. 
Calling the tokenizer once with that argument would fix the cache.\r\n> \r\n> Finding a fix that :\r\n> \r\n> * Doesn't imply a huge chunk of work on `tokenizers` (with potential loss of performance, and breaking backward compatibility)\r\n> * Doesn't imply `datasets` running a first pass of the loop\r\n> * Doesn't imply `datasets` looking at the map function itself\r\n> * Uses a sound `hash` for this object in `datasets`.\r\n> \r\n> is IIRC impossible for this use case.\r\n> \r\n> I can explain a bit more why the first option is not desirable.\r\n> \r\n> In order to \"fix\" this for tokenizers, we would need to make `tokenizer(..)` purely without side effects. This means that the \"options\" of tokenization (like `truncation` and `padding` at least) would have\r\n", "For me this workaround only works if I don't pass the `num_proc=X` argument to `datasets.map`", "Is there an easy solution for setting both num_proc and padding/truncation for fast tokenizer or caching just not a thing in this case? " ]
2022-01-27T03:19:03
2024-03-11T13:56:15
null
NONE
null
null
null
null
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') def tokenize_function(example): return tokenizer(example["sentence1"], example["sentence2"], truncation=True) raw_datasets = load_dataset("glue", "mrpc") print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) ``` got ``` Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1112.35it/s] f4976bb4694ebc51 3fca35a1fd4a1251 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.96ba/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15.25ba/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.81ba/s] d32837619b7d7d01 5fd925c82edd62b6 ``` 3. run raw_datasets.map(tokenize_function, batched=True) again and see some dataset are not using cache. 
## Expected results `AutoTokenizer` work like specific Tokenizer (The hash value don't change after map): ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') def tokenize_function(example): return tokenizer(example["sentence1"], example["sentence2"], truncation=True) raw_datasets = load_dataset("glue", "mrpc") print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) ``` ``` Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1091.22it/s] 46d4b31f54153fc7 5b8771afd8d43888 Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6b07ff82ae9d5c51.arrow Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-af738a6d84f3864b.arrow Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-531d2a603ba713c1.arrow 46d4b31f54153fc7 5b8771afd8d43888 ``` ## Environment info - `datasets` version: 1.18.0 - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 6.0.1
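A sketch of the warm-up workaround discussed in this issue's comments (not an official fix, and it reportedly does not help when `num_proc` is passed to `map`): calling the tokenizer once with the same `truncation` setting fixes its internal truncation state up front, so its hash no longer changes during the `map` call.

```python
# Sketch of the warm-up workaround: set the tokenizer's truncation state once
# before hashing / mapping so the hash stays stable across the map call.
from transformers import AutoTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer("warm", "up", truncation=True)  # sets the truncation state once

def tokenize_function(example):
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)

raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenizer))
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenizer))  # same hash as before the map call
```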
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/3638/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3638/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3637/comments
https://api.github.com/repos/huggingface/datasets/issues/3637/events
https://github.com/huggingface/datasets/issues/3637
1,115,526,438
I_kwDODunzps5CfZUm
3,637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @lewtun!\r\n \r\nThis one was tricky to debug. Initially, I thought there was a bug in the recently-added (by @lhoestq ) `cast_array_to_feature` function because `git bisect` points to the https://github.com/huggingface/datasets/commit/6ca96c707502e0689f9b58d94f46d871fa5a3c9c commit. Then, I noticed that the feature type of the `dialogue` field is `list`, which explains why you didn't get an error in earlier versions. Is there a specific reason why you use `list` instead of `Sequence` in the script? Maybe to avoid turning list of dicts to dicts of lists as it's done by `Sequence` for compatibility with TFDS or for performance reasons? If the field was `Sequence`, you would get an error in `encode_nested_example` because **the script yields some additional (nested) columns which are not specified in the `features` dictionary**. Previously, these additional columns would've been ignored by PyArrow (1), but now we have a check for them (2).\r\n(1) See PyArrow behavior:\r\n```\r\n>>> pa.array([{\"a\": 2, \"b\": 3}], type=pa.struct({\"a\": pa.int32()})) # pyarrow ignores the extra column\r\n-- is_valid: all not null\r\n-- child 0 type: int32\r\n [\r\n 2\r\n ]\r\n ```\r\n\r\n(2) Check:\r\nhttps://github.com/huggingface/datasets/blob/4c417d52def6e20359ca16c6723e0a2855e5c3fd/src/datasets/table.py#L1059\r\n\r\nThe fix is very simple: just add the missing columns to the _EMPTY_BELIEF_STATE list:\r\n```python\r\n_EMPTY_BELIEF_STATE.extend(['通用-产品类别', '火车-舱位档次', '通用-系列', '通用-价格区间', '通用-品牌'])\r\n```", "Hey @mariosasko, thank you so much for figuring this one out - it certainly looks like a tricky bug 😱 ! I don't think there's a specific reason to use `list` instead of `Sequence` with the script, but I'll let the dataset creators know to see if your suggestion is acceptable.\r\n\r\nThank you again!", "Thanks, this was indeed the fix! Would it make sense to produce a more informative error message in such cases? \r\n\r\nThe issue can be closed. \r\n\r\n" ]
2022-01-26T21:38:02
2022-02-09T16:15:53
2022-02-09T16:15:53
MEMBER
null
null
null
null
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master` too. As far as I can tell, the dataset loading script is correct and the problematic features [here](https://huggingface.co/datasets/GEM/RiSAWOZ/blob/main/RiSAWOZ.py#L237) also look fine to me. ## Steps to reproduce the bug ```python from datasets import load_dataset dset = load_dataset("GEM/RiSAWOZ") ``` ## Expected results I can load the dataset without error. ## Actual results <details><summary>Traceback</summary> ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1083 example = self.info.features.encode_example(record) -> 1084 writer.write(example, key) 1085 finally: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write(self, example, key, writer_batch_size) 445 --> 446 self.write_examples_on_file() 447 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self) 403 batch_examples[col] = [row[0][col] for row in self.current_examples] --> 404 self.write_batch(batch_examples=batch_examples) 405 self.current_examples = [] ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 497 arrays.append(pa.array(typed_sequence)) 498 inferred_features[col] = typed_sequence.get_inferred_type() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 204 # We only do it if trying_type is False - since this is what the user asks for. 
--> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 206 return out ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1064 if isinstance(feature, list): -> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0])) 1066 elif isinstance(feature, Sequence): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) -> 1087 raise 
TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") 1088 TypeError: Couldn't cast array of type struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string> to {'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), 
'餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), '电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), 
'辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)} During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) /var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_44306/2896005239.py in <module> ----> 1 dset = load_dataset("GEM/RiSAWOZ") 2 dset ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1692 1693 # Download and prepare data -> 1694 builder_instance.download_and_prepare( 1695 download_config=download_config, 1696 download_mode=download_mode, ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 593 logger.warning("HF google storage 
unreachable. Downloading and preparing it from source") 594 if not downloaded_from_gcs: --> 595 self._download_and_prepare( 596 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 597 ) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 682 try: 683 # Prepare split will record examples associated to the split --> 684 self._prepare_split(split_generator, **prepare_split_kwargs) 685 except OSError as e: 686 raise OSError( ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1084 writer.write(example, key) 1085 finally: -> 1086 num_examples, num_bytes = writer.finalize() 1087 1088 split_generator.split_info.num_examples = num_examples ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in finalize(self, close_stream) 525 # Re-intializing to empty list for next batch 526 self.hkey_record = [] --> 527 self.write_examples_on_file() 528 if self.pa_writer is None: 529 if self.schema: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self) 402 # Since current_examples contains (example, key) tuples 403 batch_examples[col] = [row[0][col] for row in self.current_examples] --> 404 self.write_batch(batch_examples=batch_examples) 405 self.current_examples = [] 406 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 495 col_try_type = try_features[col] if try_features is not None and col in try_features else None 496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 497 arrays.append(pa.array(typed_sequence)) 498 inferred_features[col] = typed_sequence.get_inferred_type() 499 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 203 # Also, when trying type "string", we don't want to convert integers or floats to "string". 204 # We only do it if trying_type is False - since this is what the user asks for. 
--> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 206 return out 207 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1063 # feature must be either [subfeature] or Sequence(subfeature) 1064 if isinstance(feature, list): -> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0])) 1066 elif isinstance(feature, Sequence): 1067 if feature.length > -1: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif 
pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1085 elif not isinstance(feature, (Sequence, dict, list, tuple)): 1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) -> 1087 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") 1088 1089 TypeError: Couldn't cast array of type struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: 
string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string> to {'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), '餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': 
Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), '电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), '辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': 
Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)} ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 3.0.0
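A minimal workaround sketch implied by the report itself (the reporter states the dataset loads on v1.17.0): pin the previous release until the `cast_array_to_feature` regression is resolved. This is an assumption-based stopgap, not a fix for the cast logic.

```python
# pip install "datasets==1.17.0"   # last release the reporter says still works
from datasets import load_dataset

# Load on the older release where the features cast reportedly succeeds;
# revisit once the 1.18.x cast regression is fixed upstream.
dset = load_dataset("GEM/RiSAWOZ")
print(dset)
```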
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3637/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3637/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
13 days, 18:37:51
https://api.github.com/repos/huggingface/datasets/issues/3634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3634/comments
https://api.github.com/repos/huggingface/datasets/issues/3634/events
https://github.com/huggingface/datasets/issues/3634
1,115,133,279
I_kwDODunzps5Cd5Vf
3,634
Dataset.shuffle(seed=None) gives fixed row permutation
{ "avatar_url": "https://avatars.githubusercontent.com/u/18127060?v=4", "events_url": "https://api.github.com/users/elisno/events{/privacy}", "followers_url": "https://api.github.com/users/elisno/followers", "following_url": "https://api.github.com/users/elisno/following{/other_user}", "gists_url": "https://api.github.com/users/elisno/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/elisno", "id": 18127060, "login": "elisno", "node_id": "MDQ6VXNlcjE4MTI3MDYw", "organizations_url": "https://api.github.com/users/elisno/orgs", "received_events_url": "https://api.github.com/users/elisno/received_events", "repos_url": "https://api.github.com/users/elisno/repos", "site_admin": false, "starred_url": "https://api.github.com/users/elisno/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elisno/subscriptions", "type": "User", "url": "https://api.github.com/users/elisno", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "I'm not sure if this is expected behavior.\r\n\r\nAm I supposed to work with a copy of the dataset, i.e. `shuffled_dataset = data.shuffle(seed=None)`?\r\n\r\n```diff\r\nimport datasets\r\n\r\n# Some toy example\r\ndata = datasets.Dataset.from_dict(\r\n {\"feature\": [1, 2, 3, 4, 5], \"label\": [\"a\", \"b\", \"c\", \"d\", \"e\"]}\r\n)\r\n\r\n+shuffled_data = data.shuffle(seed=None)\r\n\r\n# Doesn't work as expected\r\nprint(\"Shuffle dataset\")\r\nfor _ in range(3):\r\n+ shuffled_data = shuffled_data.shuffle(seed=None)\r\n+ print(shuffled_data[:])\r\n- print(data.shuffle(seed=None)[:])\r\n\r\n# This seems to work with pandas\r\nprint(\"\\nShuffle via pandas\")\r\nfor _ in range(3):\r\n df = data.to_pandas().sample(frac=1.0)\r\n print(datasets.Dataset.from_pandas(df, preserve_index=False)[:])\r\n\r\n```\r\n\r\nor provide a `generator` instead?\r\n\r\n```diff\r\nimport datasets\r\n+from numpy.random import default_rng\r\n\r\n# Some toy example\r\ndata = datasets.Dataset.from_dict(\r\n {\"feature\": [1, 2, 3, 4, 5], \"label\": [\"a\", \"b\", \"c\", \"d\", \"e\"]}\r\n)\r\n\r\n+rng = default_rng()\r\n\r\n# Doesn't work as expected\r\nprint(\"Shuffle dataset\")\r\nfor _ in range(3):\r\n+ print(data.shuffle(generator=rng)[:])\r\n- print(data.shuffle(seed=None)[:])\r\n\r\n# This seems to work with pandas\r\nprint(\"\\nShuffle via pandas\")\r\nfor _ in range(3):\r\n df = data.to_pandas().sample(frac=1.0)\r\n print(datasets.Dataset.from_pandas(df, preserve_index=False)[:])\r\n\r\n```", "Hi! Thanks for reporting! Yes, this is not expected behavior. I've opened a PR with the fix." ]
2022-01-26T15:13:08
2022-01-27T18:16:07
2022-01-27T18:16:07
NONE
null
null
null
null
## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) # Doesn't work as expected print("Shuffle dataset") for _ in range(3): print(data.shuffle(seed=None)[:]) # This seems to work with pandas print("\nShuffle via pandas") for _ in range(3): df = data.to_pandas().sample(frac=1.0) print(datasets.Dataset.from_pandas(df, preserve_index=False)[:]) ``` ## Expected results I assumed that the default setting would initialize a new/random state of a `np.random.BitGenerator` (see [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=shuffle#datasets.Dataset.shuffle)). Wouldn't that reshuffle the rows each time I call `data.shuffle()`? ## Actual results ```bash Shuffle dataset {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} Shuffle via pandas {'feature': [4, 2, 3, 1, 5], 'label': ['d', 'b', 'c', 'a', 'e']} {'feature': [2, 5, 3, 4, 1], 'label': ['b', 'e', 'c', 'd', 'a']} {'feature': [5, 2, 3, 1, 4], 'label': ['e', 'b', 'c', 'a', 'd']} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.1
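A minimal sketch of the generator-based workaround discussed in the comments on this issue: passing an explicit NumPy `Generator` to `Dataset.shuffle` draws a fresh permutation on every call instead of reusing the fixed default seed. Assumes `numpy` is installed alongside `datasets`.

```python
import datasets
from numpy.random import default_rng

# Same toy dataset as in the report
data = datasets.Dataset.from_dict(
    {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]}
)

rng = default_rng()  # fresh random state, advanced on each shuffle call
for _ in range(3):
    print(data.shuffle(generator=rng)[:])  # prints a different order each time
```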
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3634/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3634/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 3:02:59
https://api.github.com/repos/huggingface/datasets/issues/3632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3632/comments
https://api.github.com/repos/huggingface/datasets/issues/3632/events
https://github.com/huggingface/datasets/issues/3632
1,115,027,185
I_kwDODunzps5Cdfbx
3,632
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid)
{ "avatar_url": "https://avatars.githubusercontent.com/u/55232459?v=4", "events_url": "https://api.github.com/users/AnzorGozalishvili/events{/privacy}", "followers_url": "https://api.github.com/users/AnzorGozalishvili/followers", "following_url": "https://api.github.com/users/AnzorGozalishvili/following{/other_user}", "gists_url": "https://api.github.com/users/AnzorGozalishvili/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AnzorGozalishvili", "id": 55232459, "login": "AnzorGozalishvili", "node_id": "MDQ6VXNlcjU1MjMyNDU5", "organizations_url": "https://api.github.com/users/AnzorGozalishvili/orgs", "received_events_url": "https://api.github.com/users/AnzorGozalishvili/received_events", "repos_url": "https://api.github.com/users/AnzorGozalishvili/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AnzorGozalishvili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnzorGozalishvili/subscriptions", "type": "User", "url": "https://api.github.com/users/AnzorGozalishvili", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @AnzorGozalishvili,\r\n\r\nMaybe their site was temporarily down, but it seems to work fine now.\r\n\r\nCould you please try again and confirm if the problem persists? ", "Hi @albertvillanova \r\nI checked and it works. \r\nIt seems that it was really temporarily down.\r\nThanks!" ]
2022-01-26T13:35:37
2022-02-10T06:58:11
2022-02-10T06:58:11
CONTRIBUTOR
null
null
null
null
## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website that was hosting these files is no longer accessible, and therefore this dataset has become unusable. Check out the dataset [homepage](http://data.statmt.org/cc-100/), which isn't accessible. The per-language dataset file URLs aren't accessible either: http://data.statmt.org/cc-100/<language code here>.txt.xz (language codes: am, sr, ka, etc.) ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cc100", "ka") ``` It throws a 503 error. ## Expected results It should successfully download and load the dataset, but it throws an exception because the dataset files are no longer accessible. ## Environment info Run from Google Colab. The library was installed using pip: ```!pip install -U datasets```
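A small retry sketch, assuming the outage was temporary as the maintainers suggested: once http://data.statmt.org/cc-100/ responds again, force a fresh download so a previously failed or partially cached attempt is not reused.

```python
from datasets import load_dataset

# Bypass any cached (possibly corrupted) download from the failed attempt
# and fetch the Georgian split again from the restored mirror.
dataset = load_dataset("cc100", "ka", download_mode="force_redownload")
```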
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3632/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
14 days, 17:22:34
https://api.github.com/repos/huggingface/datasets/issues/3631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3631/comments
https://api.github.com/repos/huggingface/datasets/issues/3631/events
https://github.com/huggingface/datasets/issues/3631
1,114,833,662
I_kwDODunzps5CcwL-
3,631
Labels conflict when loading a local CSV file.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8571301?v=4", "events_url": "https://api.github.com/users/pichljan/events{/privacy}", "followers_url": "https://api.github.com/users/pichljan/followers", "following_url": "https://api.github.com/users/pichljan/following{/other_user}", "gists_url": "https://api.github.com/users/pichljan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pichljan", "id": 8571301, "login": "pichljan", "node_id": "MDQ6VXNlcjg1NzEzMDE=", "organizations_url": "https://api.github.com/users/pichljan/orgs", "received_events_url": "https://api.github.com/users/pichljan/received_events", "repos_url": "https://api.github.com/users/pichljan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pichljan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pichljan/subscriptions", "type": "User", "url": "https://api.github.com/users/pichljan", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @pichljan, thanks for reporting.\r\n\r\nThis should be fixed. I'm looking at it. " ]
2022-01-26T10:00:33
2022-02-11T23:02:31
2022-02-11T23:02:31
NONE
null
null
null
null
## Describe the bug I am trying to load a local CSV file with a separate file containing label names. It is successfully loaded for the first time, but when I try to load it again, there is a conflict between provided labels and the cached dataset info. Disabling caching globally and/or using `download_mode="force_redownload"` did not help. ## Steps to reproduce the bug ```python load_dataset('csv', data_files='data/my_data.csv', features=Features(text=Value(dtype='string'), label=ClassLabel(names_file='data/my_data_labels.txt'))) ``` `my_data.csv` file has the following structure: ``` text,label "example1",0 "example2",1 ... ``` and the `my_data_labels.txt` looks like this: ``` label1 label2 ... ``` ## Expected results Successfully loaded dataset. ## Actual results ```python File "/usr/local/lib/python3.8/site-packages/datasets/load.py", line 1706, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 766, in as_dataset datasets = utils.map_nested( File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 261, in map_nested mapped = [ File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 262, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 797, in _build_single_dataset ds = self._as_dataset( File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 872, in _as_dataset return Dataset(fingerprint=fingerprint, **dataset_kwargs) File "/usr/local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 638, in __init__ inferred_features = Features.from_arrow_schema(arrow_table.schema) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1242, in from_arrow_schema return Features.from_dict(metadata["info"]["features"]) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1271, in from_dict obj = generate_from_dict(dic) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1076, in generate_from_dict return {key: generate_from_dict(value) for key, value in obj.items()} File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1076, in <dictcomp> return {key: generate_from_dict(value) for key, value in obj.items()} File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1083, in generate_from_dict return class_type(**{k: v for k, v in obj.items() if k in field_names}) File "<string>", line 7, in __init__ File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 776, in __post_init__ raise ValueError("Please provide either names or names_file but not both.") ValueError: Please provide either names or names_file but not both. ``` ## Environment info - `datasets` version: 1.18.0 - Python version: 3.8.2
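A hypothetical workaround sketch (not the maintainers' fix): read the label names yourself and pass only `names=` to `ClassLabel`, so the cached dataset info never round-trips both `names` and `names_file` and the "either names or names_file" check cannot trip on reload. File paths follow the report.

```python
from datasets import load_dataset, Features, Value, ClassLabel

# Read label names (one per line) ourselves instead of using names_file
with open("data/my_data_labels.txt") as f:
    label_names = [line.strip() for line in f if line.strip()]

features = Features(
    text=Value(dtype="string"),
    label=ClassLabel(names=label_names),
)
ds = load_dataset("csv", data_files="data/my_data.csv", features=features)
```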
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3631/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3631/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
16 days, 13:01:58
https://api.github.com/repos/huggingface/datasets/issues/3630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3630/comments
https://api.github.com/repos/huggingface/datasets/issues/3630/events
https://github.com/huggingface/datasets/issues/3630
1,114,578,625
I_kwDODunzps5Cbx7B
3,630
DuplicatedKeysError of NewsQA dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4", "events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}", "followers_url": "https://api.github.com/users/StevenTang1998/followers", "following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}", "gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/StevenTang1998", "id": 37647985, "login": "StevenTang1998", "node_id": "MDQ6VXNlcjM3NjQ3OTg1", "organizations_url": "https://api.github.com/users/StevenTang1998/orgs", "received_events_url": "https://api.github.com/users/StevenTang1998/received_events", "repos_url": "https://api.github.com/users/StevenTang1998/repos", "site_admin": false, "starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions", "type": "User", "url": "https://api.github.com/users/StevenTang1998", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Thanks for reporting, @StevenTang1998.\r\n\r\nI'm fixing it. " ]
2022-01-26T03:05:49
2022-02-14T08:37:19
2022-02-14T08:37:19
NONE
null
null
null
null
After processing the dataset following official [NewsQA](https://github.com/Maluuba/newsqa), I used datasets to load it: ``` a = load_dataset('newsqa', data_dir='news') ``` and the following error occurred: ``` Using custom data configuration default-data_dir=news Downloading and preparing dataset newsqa/default to /root/.cache/huggingface/datasets/newsqa/default-data_dir=news/1.0.0/b0b23e22d94a3d352ad9d75aff2b71375264a122fae301463079ee8595e05ab9... Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1084, in _prepare_split writer.write(example, key) File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 442, in write self.check_duplicate_keys() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 1694, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 595, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 684, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1086, in _prepare_split num_examples, num_bytes = writer.finalize() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 524, in finalize self.check_duplicate_keys() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story Keys should be unique and deterministic in nature ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3630/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3630/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
19 days, 5:31:30
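The `DuplicatedKeysError` above is raised by `ArrowWriter.check_duplicate_keys()` whenever `_generate_examples` yields the same key twice — here, the same CNN story path for several questions. The following is only a hypothetical sketch of how a loading script can derive unique, deterministic keys; it is not the actual NewsQA script or the shipped fix, and the JSON field names are assumptions.
```python
import json

def generate_examples(filepath):
    # Stand-in for a GeneratorBasedBuilder._generate_examples method.
    # The key combines the story path with the question index, so several
    # questions attached to the same CNN story no longer collide in
    # ArrowWriter.check_duplicate_keys(). JSON field names are assumed.
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for story in data["data"]:
        story_path = story["storyId"]  # e.g. "./cnn/stories/6a0f9c8a...story"
        for question_idx, question in enumerate(story["questions"]):
            key = f"{story_path}_{question_idx}"  # unique and deterministic
            yield key, {"story_id": story_path, "question": question["q"]}
```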
https://api.github.com/repos/huggingface/datasets/issues/3628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3628/comments
https://api.github.com/repos/huggingface/datasets/issues/3628/events
https://github.com/huggingface/datasets/issues/3628
1,113,930,644
I_kwDODunzps5CZTuU
3,628
Dataset Card Creator drops information for "Additional Information" Section
{ "avatar_url": "https://avatars.githubusercontent.com/u/26013491?v=4", "events_url": "https://api.github.com/users/dennlinger/events{/privacy}", "followers_url": "https://api.github.com/users/dennlinger/followers", "following_url": "https://api.github.com/users/dennlinger/following{/other_user}", "gists_url": "https://api.github.com/users/dennlinger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dennlinger", "id": 26013491, "login": "dennlinger", "node_id": "MDQ6VXNlcjI2MDEzNDkx", "organizations_url": "https://api.github.com/users/dennlinger/orgs", "received_events_url": "https://api.github.com/users/dennlinger/received_events", "repos_url": "https://api.github.com/users/dennlinger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dennlinger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dennlinger/subscriptions", "type": "User", "url": "https://api.github.com/users/dennlinger", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[]
2022-01-25T14:06:17
2022-01-25T14:09:01
null
NONE
null
null
null
null
First of all, the card creator is a great addition and really helpful for streamlining dataset cards! ## Describe the bug I encountered an inconvenient bug when entering "Additional Information" in the react app, which drops already entered text when switching to a previous section, and then back again to "Additional Information". I was able to reproduce the issue in both Firefox and Chrome, so I suspect a problem with the React logic that doesn't expect users to switch back in the final section. Edit: I'm also not sure whether this is the right place to open the bug report on, since it's not clear to me which particular project it belongs to, or where I could find associated source code. ## Steps to reproduce the bug 1. Navigate to the Section "Additional Information" in the [dataset card creator](https://huggingface.co/datasets/card-creator/) 2. Enter text in an arbitrary field, e.g., "Dataset Curators". 3. Switch back to a previous section, like "Dataset Creation". 4. When switching back again to "Additional Information", the text has been deleted. Notably, this behavior can be reproduced again and again, it's not just problematic for the first "switch-back" from Additional Information. ## Expected results For step 4, the previously entered information should still be present in the boxes, similar to the behavior to all other sections (switching back there works as expected) ## Actual results The text boxes are empty again, and previously entered text got deleted. ## Environment info - `datasets` version: N/A - Platform: Firefox 96.0 / Chrome 97.0 - Python version: N/A - PyArrow version: N/A
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3628/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3628/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3626/comments
https://api.github.com/repos/huggingface/datasets/issues/3626/events
https://github.com/huggingface/datasets/issues/3626
1,113,534,436
I_kwDODunzps5CXy_k
3,626
The Pile cannot connect to host
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[]
2022-01-25T07:43:33
2022-02-14T08:40:58
2022-02-14T08:40:58
MEMBER
null
null
null
null
## Describe the bug The Pile had issues with its previous host server and has mirrored its content to another server. The dataset URLs should be updated to point to the new server.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3626/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3626/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
20 days, 0:57:25
https://api.github.com/repos/huggingface/datasets/issues/3625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3625/comments
https://api.github.com/repos/huggingface/datasets/issues/3625/events
https://github.com/huggingface/datasets/issues/3625
1,113,017,522
I_kwDODunzps5CV0yy
3,625
Add a metadata field for when source data was produced
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "A question to the datasets maintainers: is there a policy about how the set of allowed metadata fields is maintained and expanded?\r\n\r\nMetadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has https://frictionlessdata.io/, geo has ISO 19139 and INSPIRE, etc. and it's always a mess! I'm not sure we want to dig too much into it, but I'm curious to know if there has been some work on the metadata standard.", "> Metadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has [frictionlessdata.io](https://frictionlessdata.io/), geo has ISO 19139 and INSPIRE, etc. and it's always a mess! I'm not sure we want to dig too much into it, but I'm curious to know if there has been some work on the metadata standard.\r\n\r\n\r\nI thought this is a potential issue with adding this field since it might be hard to define what is general enough to be useful for most data vs what becomes very domain-specific. Potentially adding one extra field leads to more and more fields in the future. \r\n\r\nAnother issue is that there are some metadata standards around data i.e. [datacite](https://schema.datacite.org/meta/kernel-4.4/), but not many aimed explicitly at ML data afaik. Some of the discussions around metadata for ML are also more focused on versioning/managing data in production environments. My thinking is that here, some reference to the time of production would also often be tracked/relevant, i.e. for triggering model training, so having this information available in the hub would also help address this use case. ", "Adding a relevant paper related to this topic: [TimeLMs: Diachronic Language Models from Twitter](https://arxiv.org/abs/2202.03829)\r\n\r\n", "Related: https://github.com/huggingface/datasets/issues/3877", "Also related: the [Data Catalog Vocabulary - DCAT](https://www.w3.org/TR/vocab-dcat/) standard will be discussed in a new Working Group at the W3C: https://www.w3.org/2022/06/dx-wg-charter.html" ]
2022-01-24T18:52:39
2022-06-28T13:54:49
null
MEMBER
null
null
null
null
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests making metadata relating to the time that the underlying *source* data was produced more prominent and outlines why this specific information is of particular importance, both in domain-specific historic research and more broadly. **Describe the solution you'd like** There are a variety of metadata fields exposed in the dataset viewer (license, task categories, etc.) These fields make this metadata more prominent both for human users and as potentially machine-actionable information (for example, through the API). I would propose to add a metadata field that says when some underlying data was produced. For example, a dataset would be labelled as being produced between `1800-1900`. **Describe alternatives you've considered** This information is sometimes available in the Datacard or a paper describing the dataset. However, it's often not that easy to identify or extract this information, particularly if you want to use this field as a filter to identify relevant datasets. **Additional context** I believe this feature is relevant for a number of reasons: - Increasingly, there is an interest in using historical data for training language models (for example, https://huggingface.co/dbmdz/bert-base-historic-dutch-cased), and datasets to support this task (for example, https://huggingface.co/datasets/bnl_newspapers). For these datasets, indicating the time periods covered is particularly relevant. - More broadly, time is likely a common source of domain drift. Datasets of movie reviews from the 90s may not work well for recent movie reviews. As the documentation and long-term management of ML data become more of a priority, quickly understanding the time when the underlying text (or other data types) is arguably more important. - time-series data: datasets are adding more support for time series data. Again, the periods covered might be particularly relevant here. **open questions** - I think some of my points above apply not only to the underlying data but also to annotations. As a result, there could also be an argument for encoding this information somewhere. However, I would argue (but could be persuaded otherwise) that this is probably less important for filtering. This type of context is already addressed in the datasheets template and often requires more narrative to discuss. - what level of granularity would make sense for this? e.g. assigning a decade, century or year? - how to encode this information? What formatting makes sense - what specific time to encode; a data range? (mean, modal, min, max value?) This is a slightly amorphous feature request - I would be happy to discuss further/try and propose a more concrete solution if this seems like something that could be worth considering. I realise this might also touch on other parts of the 🤗 hubs ecosystem.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3625/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3625/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3622/comments
https://api.github.com/repos/huggingface/datasets/issues/3622/events
https://github.com/huggingface/datasets/issues/3622
1,112,831,661
I_kwDODunzps5CVHat
3,622
Extend support for streaming datasets that use os.path.relpath
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[]
2022-01-24T15:58:23
2022-02-04T14:03:54
2022-02-04T14:03:54
MEMBER
null
null
null
null
Extend support for streaming datasets that use `os.path.relpath`. This feature will also be useful to yield the relative path of audio or image files.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3622/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3622/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
10 days, 22:05:31
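The request above is about making loading scripts that call `os.path.relpath` work in streaming mode. The snippet below only illustrates the plain, non-streaming pattern such a script might use to yield archive-relative audio paths; nothing is asserted here about how or whether `os.walk`/`os.path.relpath` are patched for streaming in any particular `datasets` release.
```python
import os

def iter_relative_audio_paths(data_dir):
    # Walk an extracted archive and yield paths relative to its root,
    # e.g. "clips/speaker1/utt01.wav" instead of an absolute path.
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            if name.endswith(".wav"):
                full_path = os.path.join(root, name)
                yield os.path.relpath(full_path, start=data_dir)
```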
https://api.github.com/repos/huggingface/datasets/issues/3621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3621/comments
https://api.github.com/repos/huggingface/datasets/issues/3621/events
https://github.com/huggingface/datasets/issues/3621
1,112,720,434
I_kwDODunzps5CUsQy
3,621
Consider adding `ipywidgets` as a dependency.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1019791?v=4", "events_url": "https://api.github.com/users/koaning/events{/privacy}", "followers_url": "https://api.github.com/users/koaning/followers", "following_url": "https://api.github.com/users/koaning/following{/other_user}", "gists_url": "https://api.github.com/users/koaning/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/koaning", "id": 1019791, "login": "koaning", "node_id": "MDQ6VXNlcjEwMTk3OTE=", "organizations_url": "https://api.github.com/users/koaning/orgs", "received_events_url": "https://api.github.com/users/koaning/received_events", "repos_url": "https://api.github.com/users/koaning/repos", "site_admin": false, "starred_url": "https://api.github.com/users/koaning/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koaning/subscriptions", "type": "User", "url": "https://api.github.com/users/koaning", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! We use `tqdm` to display progress bars, so I suggest you open this issue in their repo.", "It depends on how you use `tqdm`, no? \r\n\r\nDoesn't this library import via; \r\n\r\n```\r\nfrom tqdm.notebook import tqdm\r\n```", "Hi! Sorry for the late reply. We import `tqdm` as `from tqdm.auto import tqdm`, which should be equal to `from tqdm.notebook import tqdm` in Jupyter.", "Any objection if I make a PR that checks if the widgets library is installed beforehand? " ]
2022-01-24T14:27:11
2022-02-24T09:04:36
2022-02-24T09:04:36
NONE
null
null
null
null
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to shut down the jupyterlab server in order to install the required dependency. Might it be an option to just include it as a dependency here?
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3621/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3621/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
30 days, 18:37:25
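The `IProgress not found` error above comes from `tqdm.auto` choosing the notebook widget backend while `ipywidgets` is missing, not from `datasets` itself. A minimal pre-flight check, assuming nothing about the `datasets` API, could look like this:
```python
import importlib.util

def notebook_progress_bars_available() -> bool:
    # tqdm.auto only needs ipywidgets when running inside Jupyter;
    # detect it up front instead of failing on the first progress bar.
    return importlib.util.find_spec("ipywidgets") is not None

if not notebook_progress_bars_available():
    print("ipywidgets is not installed: install it and restart the kernel, "
          "or expect plain-text (or failing) progress bars in notebooks.")
```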
https://api.github.com/repos/huggingface/datasets/issues/3618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3618/comments
https://api.github.com/repos/huggingface/datasets/issues/3618/events
https://github.com/huggingface/datasets/issues/3618
1,112,123,365
I_kwDODunzps5CSafl
3,618
TIMIT Dataset not working with GPU
{ "avatar_url": "https://avatars.githubusercontent.com/u/3227869?v=4", "events_url": "https://api.github.com/users/TheSeamau5/events{/privacy}", "followers_url": "https://api.github.com/users/TheSeamau5/followers", "following_url": "https://api.github.com/users/TheSeamau5/following{/other_user}", "gists_url": "https://api.github.com/users/TheSeamau5/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TheSeamau5", "id": 3227869, "login": "TheSeamau5", "node_id": "MDQ6VXNlcjMyMjc4Njk=", "organizations_url": "https://api.github.com/users/TheSeamau5/orgs", "received_events_url": "https://api.github.com/users/TheSeamau5/received_events", "repos_url": "https://api.github.com/users/TheSeamau5/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TheSeamau5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheSeamau5/subscriptions", "type": "User", "url": "https://api.github.com/users/TheSeamau5", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! I think you should avoid calling `timit_train['audio']`. Indeed by doing so you're **loading all the audio column in memory**. This is problematic in your case because the TIMIT dataset is huge.\r\n\r\nIf you want to access the audio data of some samples, you should do this instead `timit_train[:10][\"train\"]` for example.\r\n\r\nOther than that, I'm not sure why you get a `TypeError: string indices must be integers`, do you have a code snippet that reproduces the issue that you can share here ?", "I get the same error when I try to do `timit_train[0]` or really any indexing into the whole thing. \r\n\r\nReally, that IS the code snippet that reproduces the issue. If you index into other fields like 'file' or whatever, it works. As soon as one of the fields you're looking into is 'audio', you get that issue. It's a weird issue and I suspect it's Sagemaker/environment related, maybe the mix of libraries and dependencies are not good. \r\n\r\n\r\nExample code snippet with issue. \r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntimit_train = load_dataset('timit_asr', split='train')\r\nprint(timit_train[0])\r\n```", "Ok I see ! From the error you got, it looks like the `value` encoded in the arrow file of the TIMIT dataset you loaded is a string instead of a dictionary with keys \"path\" and \"bytes\" but we don't support this since 1.18\r\n\r\nCan you try regenerating the dataset with `load_dataset('timit_asr', download_mode=\"force_redownload\")` please ? I think it should fix the issue." ]
2022-01-24T03:26:03
2023-07-25T15:20:20
2023-07-25T15:20:20
NONE
null
null
null
null
## Describe the bug I am working trying to use the TIMIT dataset in order to fine-tune Wav2Vec2 model and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4dn.xlarge instance (corresponds to a Tesla T4 GPU). I don't know if the issue is GPU related or Python environment related because everything works when I work off of the CPU Optimized environment with a non-GPU instance. My code also works on Google Colab with a GPU instance. This issue is blocking because I cannot get the 'audio' column in any way due to this error, which means that I can't pass it to any functions. I later use the dataset.map function and that is where I originally noticed this error. ## Steps to reproduce the bug ```python from datasets import load_dataset timit_train = load_dataset('timit_asr', split='train') print(timit_train['audio']) ``` ## Expected results Expected to see inside the 'audio' column, which contains an 'array' nested field with the array data I actually need. ## Actual results Traceback ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-ceeac555e921> in <module> ----> 1 timit_train['audio'] /opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key) 1917 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" 1918 return self._getitem( -> 1919 key, 1920 ) 1921 /opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs) 1902 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 1903 formatted_output = format_table( -> 1904 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1905 ) 1906 return formatted_output /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) 529 python_formatter = PythonFormatter(features=None) 530 if format_columns is None: --> 531 return formatter(pa_table, query_type=query_type) 532 elif query_type == "column": 533 if key in format_columns: /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type) 280 return self.format_row(pa_table) 281 elif query_type == "column": --> 282 return self.format_column(pa_table) 283 elif query_type == "batch": 284 return self.format_batch(pa_table) /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_column(self, pa_table) 315 column = self.python_arrow_extractor().extract_column(pa_table) 316 if self.decoded: --> 317 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) 318 return column 319 /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_column(self, column, column_name) 221 222 def decode_column(self, column: list, column_name: str) -> list: --> 223 return self.features.decode_column(column, column_name) if self.features else column 224 225 def decode_batch(self, batch: dict) -> dict: /opt/conda/lib/python3.6/site-packages/datasets/features/features.py in decode_column(self, column, column_name) 1337 return ( 1338 [self[column_name].decode_example(value) if value is not None else None for value in column] -> 1339 if self._column_requires_decoding[column_name] 1340 else column 1341 ) /opt/conda/lib/python3.6/site-packages/datasets/features/features.py in <listcomp>(.0) 1336 """ 1337 return ( -> 1338 [self[column_name].decode_example(value) if value is not None else None for value in column] 1339 if self._column_requires_decoding[column_name] 1340 else column /opt/conda/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value) 85 dict 86 """ ---> 87 path, file = (value["path"], BytesIO(value["bytes"])) if value["bytes"] is not None else (value["path"], None) 88 if path is None and file is None: 89 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.") TypeError: string indices must be integers ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: Linux-4.14.256-197.484.amzn2.x86_64-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3618/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3618/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
547 days, 11:54:17
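The suggested resolution above is to regenerate the cached Arrow files, since the cache was written before the 1.18 audio encoding change. A small sketch combining that with per-example access (instead of loading the whole audio column) follows; it assumes the TIMIT data can still be downloaded automatically, as it was at the time of the issue.
```python
from datasets import load_dataset

# Rebuild the cache so the Audio feature is stored in the post-1.18 format,
# then decode one example at a time instead of calling timit_train["audio"].
timit_train = load_dataset("timit_asr", split="train",
                           download_mode="force_redownload")
sample = timit_train[0]["audio"]  # {"path": ..., "array": ..., "sampling_rate": ...}
print(sample["sampling_rate"], sample["array"].shape)
```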
https://api.github.com/repos/huggingface/datasets/issues/3615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3615/comments
https://api.github.com/repos/huggingface/datasets/issues/3615/events
https://github.com/huggingface/datasets/issues/3615
1,111,576,876
I_kwDODunzps5CQVEs
3,615
Dataset BnL Historical Newspapers does not work in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "@albertvillanova let me know if there is anything I can do to help with this. I had a quick look at the code again and though I could try the following changes:\r\n- use `download` instead of `download_and_extract`\r\nhttps://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bnl_newspapers/bnl_newspapers.py#L136\r\n- swith to using `iter_archive` to loop through downloaded data to replace\r\nhttps://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bnl_newspapers/bnl_newspapers.py#L159\r\n\r\nLet me know if it's useful for me to try and make those changes. ", "Thanks @davanstrien.\r\n\r\nI have already been working on it so that it can be used in the BigScience workshop.\r\n\r\nI agree that the `rglob()` is not efficient in this case.\r\n\r\nI tried different solutions without success:\r\n- `iter_archive` cannot be used in this case because it does not support ZIP files yet\r\n\r\nFinally I have used `iter_files()`.", "I see this is fixed now 🙂. I also picked up a few other tips from your redactors so hopefully my next attempts will support streaming from the start. " ]
2022-01-22T14:12:59
2022-02-04T14:05:21
2022-02-04T14:05:21
MEMBER
null
null
null
null
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3615/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3615/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
12 days, 23:52:22
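The fix described in the comments above replaces an eager `Path(...).rglob()` over the extracted ZIP with the download manager's lazy `iter_files()`. The builder below is a rough sketch of that pattern with a placeholder URL and made-up feature names; it is not the actual BnL script.
```python
import datasets

_URL = "https://example.org/newspapers.zip"  # placeholder, not the real BnL URL

class NewspapersSketch(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"path": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive_dir = dl_manager.download_and_extract(_URL)
        # iter_files() walks the extracted files lazily and also works in
        # streaming mode, unlike pathlib's rglob over a local directory.
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_files(archive_dir)},
            )
        ]

    def _generate_examples(self, files):
        for idx, path in enumerate(files):
            if path.endswith(".xml"):
                yield idx, {"path": path}
```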
https://api.github.com/repos/huggingface/datasets/issues/3613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3613/comments
https://api.github.com/repos/huggingface/datasets/issues/3613/events
https://github.com/huggingface/datasets/issues/3613
1,110,684,015
I_kwDODunzps5CM7Fv
3,613
Files not updating in dataset viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abidlabs", "id": 1778297, "login": "abidlabs", "node_id": "MDQ6VXNlcjE3NzgyOTc=", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "repos_url": "https://api.github.com/users/abidlabs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "type": "User", "url": "https://api.github.com/users/abidlabs", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
[ "Yes. The jobs queue is full right now, following an upgrade... Back to normality in the next hours hopefully. I'll look at your datasets to be sure the dataset viewer works as expected on them.", "Should have been fixed now." ]
2022-01-21T16:47:20
2022-01-22T08:13:13
2022-01-22T08:13:13
MEMBER
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is reading a cached version of the dataset and it is not updating to reflect new files that are added to the dataset. I get this error: ![image](https://user-images.githubusercontent.com/1778297/150566660-30dc0dcd-18fd-4471-b70c-7c4bdc6a23c6.png) Am I the one who added this dataset? Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3613/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3613/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
15:25:53
https://api.github.com/repos/huggingface/datasets/issues/3611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3611/comments
https://api.github.com/repos/huggingface/datasets/issues/3611/events
https://github.com/huggingface/datasets/issues/3611
1,110,399,096
I_kwDODunzps5CL1h4
3,611
Indexing bug after dataset.select()
{ "avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4", "events_url": "https://api.github.com/users/kamalkraj/events{/privacy}", "followers_url": "https://api.github.com/users/kamalkraj/followers", "following_url": "https://api.github.com/users/kamalkraj/following{/other_user}", "gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kamalkraj", "id": 17096858, "login": "kamalkraj", "node_id": "MDQ6VXNlcjE3MDk2ODU4", "organizations_url": "https://api.github.com/users/kamalkraj/orgs", "received_events_url": "https://api.github.com/users/kamalkraj/received_events", "repos_url": "https://api.github.com/users/kamalkraj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions", "type": "User", "url": "https://api.github.com/users/kamalkraj", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "Hi! Thanks for reporting! I've opened a PR with the fix." ]
2022-01-21T12:09:30
2022-01-27T18:16:22
2022-01-27T18:16:22
NONE
null
null
null
null
## Describe the bug A clear and concise description of what the bug is. Dataset indexing is not working as expected after `dataset.select(range(100))` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } task_name = "sst2" raw_datasets = datasets.load_dataset("glue", task_name) train_dataset = raw_datasets["train"] print("before select: ",train_dataset[-2:]) # before select: {'sentence': ['a patient viewer ', 'this new jangle of noise , mayhem and stupidity must be a serious contender for the title . '], 'label': [1, 0], 'idx': [67347, 67348]} train_dataset = train_dataset.select(range(100)) print("after select: ",train_dataset[-2:]) # after select: {'sentence': [], 'label': [], 'idx': []} ``` link to colab: https://colab.research.google.com/drive/1LngeRC9f0jE7eSQ4Kh1cIeb411lRXQD-?usp=sharing ## Expected results A clear and concise description of the expected results. showing 98, 99 index data ## Actual results Specify the actual results or traceback. empty ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3611/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3611/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
6 days, 6:06:52
https://api.github.com/repos/huggingface/datasets/issues/3610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3610/comments
https://api.github.com/repos/huggingface/datasets/issues/3610/events
https://github.com/huggingface/datasets/issues/3610
1,109,777,314
I_kwDODunzps5CJdui
3,610
Checksum error when trying to load amazon_review dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "It is solved now" ]
2022-01-20T21:20:32
2022-01-21T13:22:31
2022-01-21T13:22:31
NONE
null
null
null
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug I am getting the issue when trying to load dataset using ``` dataset = load_dataset("amazon_polarity") ``` ## Expected results dataset loaded ## Actual results ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-3-b4758ba980ae> in <module>() ----> 1 dataset = load_dataset("amazon_polarity") 2 dataset.set_format(type='pandas') 3 content_series = dataset['train']['content'] 4 label_series = dataset['train']['label'] 5 df = pd.concat([content_series, label_series], axis=1) 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Google colab - Python version: 3.7.12
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3610/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3610/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
16:01:59
https://api.github.com/repos/huggingface/datasets/issues/3608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3608/comments
https://api.github.com/repos/huggingface/datasets/issues/3608/events
https://github.com/huggingface/datasets/issues/3608
1,109,310,981
I_kwDODunzps5CHr4F
3,608
Add support for continuous metrics (RMSE, MAE)
{ "avatar_url": "https://avatars.githubusercontent.com/u/50770?v=4", "events_url": "https://api.github.com/users/ck37/events{/privacy}", "followers_url": "https://api.github.com/users/ck37/followers", "following_url": "https://api.github.com/users/ck37/following{/other_user}", "gists_url": "https://api.github.com/users/ck37/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ck37", "id": 50770, "login": "ck37", "node_id": "MDQ6VXNlcjUwNzcw", "organizations_url": "https://api.github.com/users/ck37/orgs", "received_events_url": "https://api.github.com/users/ck37/received_events", "repos_url": "https://api.github.com/users/ck37/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ck37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ck37/subscriptions", "type": "User", "url": "https://api.github.com/users/ck37", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
[ "Hey @ck37 \r\n\r\nYou can always use a custom metric as explained [in this guide from HF](https://huggingface.co/docs/datasets/master/loading_metrics.html#using-a-custom-metric-script).\r\n\r\nIf this issue needs to be contributed to (for enhancing the metric API) I think [this link](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html) would be helpful for the `MAE` metric.", "You can use a local metric script just by providing its path instead of the usual shortcut name ", "#self-assign I have starting working on this issue to enhance the metric API." ]
2022-01-20T13:35:36
2022-03-09T17:18:20
2022-03-09T17:18:20
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP models conduct regression rather than classification, so binary metrics are not relevant. The only continuous metrics available at https://huggingface.co/metrics are pearson & spearman correlation, which don't ensure that the prediction is on the same scale as the outcome. **Describe the solution you'd like** I would like to be able to tag our models on the Hub with the following metrics: - RMSE - MAE **Describe alternatives you've considered** I don't know if there are any alternatives. **Additional context** Our preprint is available here: https://arxiv.org/abs/2009.10277 . We are making it available for use in Jigsaw's Toxic Severity Rating Kaggle competition: https://www.kaggle.com/c/jigsaw-toxic-severity-rating/overview . I have our first model uploaded to the Hub at https://huggingface.co/ucberkeley-dlab/hate-measure-roberta-large Thanks, Chris
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3608/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3608/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
48 days, 3:42:44
https://api.github.com/repos/huggingface/datasets/issues/3606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3606/comments
https://api.github.com/repos/huggingface/datasets/issues/3606/events
https://github.com/huggingface/datasets/issues/3606
1,108,918,701
I_kwDODunzps5CGMGt
3,606
audio column not saved correctly after resampling
{ "avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4", "events_url": "https://api.github.com/users/laphang/events{/privacy}", "followers_url": "https://api.github.com/users/laphang/followers", "following_url": "https://api.github.com/users/laphang/following{/other_user}", "gists_url": "https://api.github.com/users/laphang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/laphang", "id": 24724502, "login": "laphang", "node_id": "MDQ6VXNlcjI0NzI0NTAy", "organizations_url": "https://api.github.com/users/laphang/orgs", "received_events_url": "https://api.github.com/users/laphang/received_events", "repos_url": "https://api.github.com/users/laphang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laphang/subscriptions", "type": "User", "url": "https://api.github.com/users/laphang", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! We just released a new version of `datasets` that should fix this.\r\n\r\nI tested resampling and using save/load_from_disk afterwards and it seems to be fixed now", "Hi @lhoestq, \r\n\r\nJust tested the latest datasets version, and confirming that this is fixed for me. \r\n\r\nThanks!", "Also, just an FYI, data that I had saved (with save_to_disk) previously from common voice using datasets==1.17.0 now give the error below when loading (with load_from disk) using datasets==1.18.0. \r\n\r\nHowever, when starting fresh using load_dataset, then doing the resampling, the save/load_from disk worked fine. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<timed exec> in <module>\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1747 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1748 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):\r\n-> 1749 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1750 else:\r\n 1751 raise FileNotFoundError(\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in load_from_disk(dataset_dict_path, fs, keep_in_memory)\r\n 769 else Path(dest_dataset_dict_path, k).as_posix()\r\n 770 )\r\n--> 771 dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n 772 return dataset_dict\r\n 773 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1118 info=dataset_info,\r\n 1119 split=split,\r\n-> 1120 fingerprint=state[\"_fingerprint\"],\r\n 1121 )\r\n 1122 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)\r\n 655 if self.info.features.type != inferred_features.type:\r\n 656 raise ValueError(\r\n--> 657 f\"External features info don't match the dataset:\\nGot\\n{self.info.features}\\nwith type\\n{self.info.features.type}\\n\\nbut expected something like\\n{inferred_features}\\nwith type\\n{inferred_features.type}\"\r\n 658 )\r\n 659 \r\n\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=48000, mono=True, id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<bytes: binary, path: string>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, up_votes: int64>\r\n\r\nbut expected something like\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<path: string, bytes: binary>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, up_votes: int64> \r\n```" ]
2022-01-20T06:37:10
2022-01-23T01:41:01
2022-01-23T01:24:14
NONE
null
null
null
null
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of common voice dataset (48Khz) - resample audio column to 16Khz - save with save_to_disk() - load with load_from_disk() ## Expected results I expected that after saving the data, and then loading it back in, the audio column has the correct dataset.Audio type (i.e. same as before saving it) {'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)} ## Actual results Audio column does not have the right type {'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)} ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: linux - Python version: - PyArrow version:
{ "avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4", "events_url": "https://api.github.com/users/laphang/events{/privacy}", "followers_url": "https://api.github.com/users/laphang/followers", "following_url": "https://api.github.com/users/laphang/following{/other_user}", "gists_url": "https://api.github.com/users/laphang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/laphang", "id": 24724502, "login": "laphang", "node_id": "MDQ6VXNlcjI0NzI0NTAy", "organizations_url": "https://api.github.com/users/laphang/orgs", "received_events_url": "https://api.github.com/users/laphang/received_events", "repos_url": "https://api.github.com/users/laphang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laphang/subscriptions", "type": "User", "url": "https://api.github.com/users/laphang", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3606/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3606/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 18:47:04
https://api.github.com/repos/huggingface/datasets/issues/3604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3604/comments
https://api.github.com/repos/huggingface/datasets/issues/3604/events
https://github.com/huggingface/datasets/issues/3604
1,108,477,316
I_kwDODunzps5CEgWE
3,604
Dataset Viewer not showing Previews for Private Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abidlabs", "id": 1778297, "login": "abidlabs", "node_id": "MDQ6VXNlcjE3NzgyOTc=", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "repos_url": "https://api.github.com/users/abidlabs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "type": "User", "url": "https://api.github.com/users/abidlabs", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" } ]
[ "Sure, it's on the roadmap.", "Closing in favor of https://github.com/huggingface/datasets-server/issues/39." ]
2022-01-19T19:29:26
2022-09-26T08:04:43
2022-09-26T08:04:43
MEMBER
null
null
null
null
## Dataset viewer issue for 'abidlabs/test-audio-13' It seems that the dataset viewer does not show previews for `private` datasets, even for the user who's private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private datasets. ![image](https://user-images.githubusercontent.com/1778297/150200515-93ff1545-11fd-4793-be64-6bed3cd895e2.png) **Link:** [1] https://huggingface.co/datasets/abidlabs/test-audio-13 **Am I the one who added this dataset?** Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3604/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3604/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
249 days, 12:35:17
https://api.github.com/repos/huggingface/datasets/issues/3599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3599/comments
https://api.github.com/repos/huggingface/datasets/issues/3599/events
https://github.com/huggingface/datasets/issues/3599
1,108,111,607
I_kwDODunzps5CDHD3
3,599
The `add_column()` method does not work if used on dataset sliced with `select()`
{ "avatar_url": "https://avatars.githubusercontent.com/u/59422506?v=4", "events_url": "https://api.github.com/users/ThGouzias/events{/privacy}", "followers_url": "https://api.github.com/users/ThGouzias/followers", "following_url": "https://api.github.com/users/ThGouzias/following{/other_user}", "gists_url": "https://api.github.com/users/ThGouzias/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ThGouzias", "id": 59422506, "login": "ThGouzias", "node_id": "MDQ6VXNlcjU5NDIyNTA2", "organizations_url": "https://api.github.com/users/ThGouzias/orgs", "received_events_url": "https://api.github.com/users/ThGouzias/received_events", "repos_url": "https://api.github.com/users/ThGouzias/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ThGouzias/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThGouzias/subscriptions", "type": "User", "url": "https://api.github.com/users/ThGouzias", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "similar #3611 " ]
2022-01-19T13:36:50
2022-01-28T15:35:57
2022-01-28T15:35:57
NONE
null
null
null
null
Hello, I posted this as a question on the forums ([here](https://discuss.huggingface.co/t/add-column-does-not-work-if-used-on-dataset-sliced-with-select/13893)): I have a dataset with 2000 entries > dataset = Dataset.from_dict({'colA': list(range(2000))}) and from which I want to extract the first one thousand rows, create a new dataset with these and also add a new column to it: > dataset2 = dataset.select(list(range(1000))) > final_dataset = dataset2.add_column('colB', list(range(1000))) This gives an error >ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000 So it looks like even though it is a dataset with 1000 rows, it "remembers" the shape of the one it was sliced from. ## Actual results ``` ArrowInvalid Traceback (most recent call last) <ipython-input-138-e806860f3ce3> in <module> ----> 1 final_dataset = dataset2.add_column('colB', list(range(1000))) ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 468 } 469 # apply actual function --> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 472 # re-apply format to the output ~/.local/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 404 # Call actual function 405 --> 406 out = func(self, *args, **kwargs) 407 408 # Update fingerprint of in-place transforms + update in-place history of transforms ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint) 3343 column_table = InMemoryTable.from_pydict({name: column}) 3344 # Concatenate tables horizontally -> 3345 table = ConcatenationTable.from_tables([self._data, column_table], axis=1) 3346 # Update features 3347 info = self.info.copy() ~/.local/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis) 729 table_blocks = to_blocks(table) 730 blocks = _extend_blocks(blocks, table_blocks, axis=axis) --> 731 return cls.from_blocks(blocks) 732 733 @property ~/.local/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks) 668 @classmethod 669 def from_blocks(cls, blocks: TableBlockContainer) -> "ConcatenationTable": --> 670 blocks = cls._consolidate_blocks(blocks) 671 if isinstance(blocks, TableBlock): 672 table = blocks ~/.local/lib/python3.8/site-packages/datasets/table.py in _consolidate_blocks(cls, blocks) 664 return cls._merge_blocks(blocks, axis=0) 665 else: --> 666 return cls._merge_blocks(blocks) 667 668 @classmethod ~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis) 650 merged_blocks += list(block_group) 651 else: # both --> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks] 653 if all(len(row_block) == 1 for row_block in merged_blocks): 654 merged_blocks = cls._merge_blocks( ~/.local/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 650 merged_blocks += list(block_group) 651 else: # both --> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks] 653 if all(len(row_block) == 1 for row_block in merged_blocks): 654 merged_blocks = cls._merge_blocks( ~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis) 647 for is_in_memory, block_group in groupby(blocks, key=lambda x: isinstance(x, InMemoryTable)): 648 if is_in_memory: --> 649 block_group = [InMemoryTable(cls._concat_blocks(list(block_group), axis=axis))] 650 merged_blocks += list(block_group) 651 else: # both ~/.local/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis) 626 else: 627 for name, col in zip(table.column_names, table.columns): --> 628 pa_table = pa_table.append_column(name, col) 629 return pa_table 630 else: ~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column() ~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column() ~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000 ``` A solution provided by @mariosasko is to use `dataset2.flatten_indices()` after the `select()` and before attempting to add the new column: > dataset = Dataset.from_dict({'colA': list(range(2000))}) > dataset2 = dataset.select(list(range(1000))) > dataset2 = dataset2.flatten_indices() > final_dataset = dataset2.add_column('colB', list(range(1000))) which works. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.2 (note: also checked with version 1.17.0, still the same error) - Platform: Ubuntu 20.04.3 - Python version: 3.8.10 - PyArrow version: 6.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3599/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3599/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
9 days, 1:59:07
https://api.github.com/repos/huggingface/datasets/issues/3598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3598/comments
https://api.github.com/repos/huggingface/datasets/issues/3598/events
https://github.com/huggingface/datasets/issues/3598
1,108,107,199
I_kwDODunzps5CDF-_
3,598
Readme info not being parsed to show on Dataset card page
{ "avatar_url": "https://avatars.githubusercontent.com/u/79796807?v=4", "events_url": "https://api.github.com/users/davidcanovas/events{/privacy}", "followers_url": "https://api.github.com/users/davidcanovas/followers", "following_url": "https://api.github.com/users/davidcanovas/following{/other_user}", "gists_url": "https://api.github.com/users/davidcanovas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidcanovas", "id": 79796807, "login": "davidcanovas", "node_id": "MDQ6VXNlcjc5Nzk2ODA3", "organizations_url": "https://api.github.com/users/davidcanovas/orgs", "received_events_url": "https://api.github.com/users/davidcanovas/received_events", "repos_url": "https://api.github.com/users/davidcanovas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidcanovas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidcanovas/subscriptions", "type": "User", "url": "https://api.github.com/users/davidcanovas", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "i suspect a markdown parsing error, @severo do you want to take a quick look at it when you have some time?", "# Problem\r\nThe issue seems to coming from the front matter of the README\r\n```---\r\nannotations_creators:\r\n- no-annotation\r\nlanguage_creators:\r\n- machine-generated\r\nlanguages:\r\n- 'ca'\r\n- 'de'\r\nlicenses:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- translation\r\npretty_name: Catalan-German aligned corpora to train NMT systems.\r\nsize_categories:\r\n- \"1M<n<10M\" \r\nsource_datasets:\r\n- extended|tilde_model\r\ntask_categories:\r\n- machine-translation\r\ntask_ids:\r\n- machine-translation\r\n---\r\n``` \r\n# Solution\r\nThe fix is to correctly style the README as explained [here](https://huggingface.co/docs/datasets/v1.12.0/dataset_card.html). I have also correctly parsed the font matter as shown below:\r\n```\r\n---\r\nannotations_creators: []\r\nlanguage_creators: [machine-generated]\r\nlanguages: ['ca', 'de']\r\nlicenses: []\r\nmultilinguality:\r\n- multilingual\r\npretty_name: 'Catalan-German aligned corpora to train NMT systems.'\r\nsize_categories: \r\n- 1M<n<10M\r\nsource_datasets: ['extended|tilde_model']\r\ntask_categories: ['machine-translation']\r\ntask_ids: ['machine-translation']\r\n---\r\n```\r\nYou can find the README for a sample dataset [here](https://huggingface.co/datasets/ritwikraha/Test)", "Thank you. It finally worked implementing your changes and leaving a white line between title and text in the description.", "Thanks, if this solves your issue, can you please close it?" ]
2022-01-19T13:32:29
2022-01-21T10:20:01
2022-01-21T10:20:01
NONE
null
null
null
null
## Describe the bug The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README file is this one: https://huggingface.co/datasets/softcatala/Tilde-MODEL-Catalan/blob/main/README.md ## Expected results README info should appear in the Dataset card page. ## Actual results Nothing is shown. However, labels are parsed and shown successfully.
{ "avatar_url": "https://avatars.githubusercontent.com/u/79796807?v=4", "events_url": "https://api.github.com/users/davidcanovas/events{/privacy}", "followers_url": "https://api.github.com/users/davidcanovas/followers", "following_url": "https://api.github.com/users/davidcanovas/following{/other_user}", "gists_url": "https://api.github.com/users/davidcanovas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidcanovas", "id": 79796807, "login": "davidcanovas", "node_id": "MDQ6VXNlcjc5Nzk2ODA3", "organizations_url": "https://api.github.com/users/davidcanovas/orgs", "received_events_url": "https://api.github.com/users/davidcanovas/received_events", "repos_url": "https://api.github.com/users/davidcanovas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidcanovas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidcanovas/subscriptions", "type": "User", "url": "https://api.github.com/users/davidcanovas", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3598/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3598/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 20:47:32
https://api.github.com/repos/huggingface/datasets/issues/3597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3597/comments
https://api.github.com/repos/huggingface/datasets/issues/3597/events
https://github.com/huggingface/datasets/issues/3597
1,108,092,864
I_kwDODunzps5CDCfA
3,597
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
{ "avatar_url": "https://avatars.githubusercontent.com/u/49492030?v=4", "events_url": "https://api.github.com/users/amitkml/events{/privacy}", "followers_url": "https://api.github.com/users/amitkml/followers", "following_url": "https://api.github.com/users/amitkml/following{/other_user}", "gists_url": "https://api.github.com/users/amitkml/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amitkml", "id": 49492030, "login": "amitkml", "node_id": "MDQ6VXNlcjQ5NDkyMDMw", "organizations_url": "https://api.github.com/users/amitkml/orgs", "received_events_url": "https://api.github.com/users/amitkml/received_events", "repos_url": "https://api.github.com/users/amitkml/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amitkml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amitkml/subscriptions", "type": "User", "url": "https://api.github.com/users/amitkml", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! The `cd` command in Jupyer/Colab needs to start with `%`, so this should work:\r\n```\r\n!git clone https://github.com/huggingface/datasets.git\r\n%cd datasets\r\n!pip install -e \".[streaming]\"\r\n```", "thanks @mariosasko i had the same mistake and your solution is what was needed" ]
2022-01-19T13:19:28
2022-08-05T12:35:51
2022-02-14T08:46:34
NONE
null
null
null
null
## Bug The install of streaming dataset is giving following error. ## Steps to reproduce the bug ```python ! git clone https://github.com/huggingface/datasets.git ! cd datasets ! pip install -e ".[streaming]" ``` ## Actual results Cloning into 'datasets'... remote: Enumerating objects: 50816, done. remote: Counting objects: 100% (2356/2356), done. remote: Compressing objects: 100% (1606/1606), done. remote: Total 50816 (delta 834), reused 1741 (delta 525), pack-reused 48460 Receiving objects: 100% (50816/50816), 72.47 MiB | 27.68 MiB/s, done. Resolving deltas: 100% (22541/22541), done. Checking out files: 100% (6722/6722), done. ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3597/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3597/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
25 days, 19:27:06
https://api.github.com/repos/huggingface/datasets/issues/3596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3596/comments
https://api.github.com/repos/huggingface/datasets/issues/3596/events
https://github.com/huggingface/datasets/issues/3596
1,107,345,338
I_kwDODunzps5CAL-6
3,596
Loss of cast `Image` feature on certain dataset method
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.", "> Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.\r\n\r\nThanks, I'll keep an eye out for #3575 getting merged. I managed to use `push_to_hub` sucesfully with images when they were loaded via `map` - something like `ds.map(lambda example: {\"img\": load_image_function(example['fname']})`, this only pushed the images to the hub if the `load_image_function` return a PIL Image without the filename attribute though. I guess this might often be the prefered behaviour though. \r\n", "Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?", "> Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?\r\n\r\nThanks for checking. There is no longer an error when calling `select` but it appears the cast value isn't preserved. Before `select`\r\n\r\n```python\r\ndataset.features\r\n{'url': Image(id=None)}\r\n```\r\n\r\nafter select:\r\n```\r\n{'url': Value(dtype='string', id=None)}\r\n```\r\n\r\nUpdated Colab example [here](https://colab.research.google.com/gist/davanstrien/4e88f55a3675c279b5c2f64299ae5c6f/potential_casting_bug.ipynb) ", "Hmmm, if I re-run your google colab I'm getting the right type at the end:\r\n```\r\nsample.features\r\n# {'url': Image(id=None)}\r\n```", "Appolgies - I've just run again and also got this output. I have also sucesfully used the `push_to_hub` method. I think this is fixed now so will close this issue. ", "Fixed in #3575 " ]
2022-01-18T20:44:01
2022-01-21T18:07:28
2022-01-21T18:07:28
MEMBER
null
null
null
null
## Describe the bug When an a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a dataset which has had a column cast to an `Image`. I suspect this might be related to https://github.com/huggingface/datasets/pull/3556 but I don't believe that pull request fixes this issue. ## Steps to reproduce the bug An example of casting a url to an image followed by using the `select` method: ```python from datasets import Dataset from datasets import features url = "https://cf.ltkcdn.net/cats/images/std-lg/246866-1200x816-grey-white-kitten.webp" data_dict = {"url": [url]*2} dataset = Dataset.from_dict(data_dict) dataset = dataset.cast_column('url',features.Image()) sample = dataset.select([1]) ``` [example notebook](https://gist.github.com/davanstrien/06e53f4383c28ae77ce1b30d0eaf0d70#file-potential_casting_bug-ipynb) ## Expected results The cast value is maintained when further methods are applied to the dataset. ## Actual results ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-12-47f393bc2d0d> in <module>() ----> 1 sample = dataset.select([1]) 4 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 487 } 488 # apply actual function --> 489 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 490 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 491 # re-apply format to the output /usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 409 # Call actual function 410 --> 411 out = func(self, *args, **kwargs) 412 413 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 2772 ) 2773 else: -> 2774 return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint) 2775 2776 @transmit_format /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _new_dataset_with_indices(self, indices_cache_file_name, indices_buffer, fingerprint) 2688 split=self.split, 2689 indices_table=indices_table, -> 2690 fingerprint=fingerprint, 2691 ) 2692 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint) 664 if self.info.features.type != inferred_features.type: 665 raise ValueError( --> 666 f"External features info don't match the dataset:\nGot\n{self.info.features}\nwith type\n{self.info.features.type}\n\nbut expected something like\n{inferred_features}\nwith type\n{inferred_features.type}" 667 ) 668 ValueError: External features info don't match the dataset: Got {'url': Image(id=None)} with type struct<url: extension<arrow.py_extension_type<ImageExtensionType>>> but expected something like {'url': Value(dtype='string', id=None)} with type struct<url: string> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.1.dev0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3596/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3596/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 21:23:27
https://api.github.com/repos/huggingface/datasets/issues/3587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3587/comments
https://api.github.com/repos/huggingface/datasets/issues/3587/events
https://github.com/huggingface/datasets/issues/3587
1,106,719,182
I_kwDODunzps5B9zHO
3,587
No module named 'fsspec.archive'
{ "avatar_url": "https://avatars.githubusercontent.com/u/13246825?v=4", "events_url": "https://api.github.com/users/shuuchen/events{/privacy}", "followers_url": "https://api.github.com/users/shuuchen/followers", "following_url": "https://api.github.com/users/shuuchen/following{/other_user}", "gists_url": "https://api.github.com/users/shuuchen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shuuchen", "id": 13246825, "login": "shuuchen", "node_id": "MDQ6VXNlcjEzMjQ2ODI1", "organizations_url": "https://api.github.com/users/shuuchen/orgs", "received_events_url": "https://api.github.com/users/shuuchen/received_events", "repos_url": "https://api.github.com/users/shuuchen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shuuchen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shuuchen/subscriptions", "type": "User", "url": "https://api.github.com/users/shuuchen", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[]
2022-01-18T10:17:01
2022-08-11T09:57:54
2022-01-18T10:33:10
NONE
null
null
null
null
## Describe the bug Cannot import datasets after installation. ## Steps to reproduce the bug ```shell $ python Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 61, in <module> from .arrow_writer import ArrowWriter, OptimizedTypedSequence File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 28, in <module> from .features import ( File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/__init__.py", line 2, in <module> from .audio import Audio File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/audio.py", line 7, in <module> from ..utils.streaming_download_manager import xopen File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 18, in <module> from ..filesystems import COMPRESSION_FILESYSTEMS File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/__init__.py", line 6, in <module> from . import compression File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/compression.py", line 5, in <module> from fsspec.archive import AbstractArchiveFileSystem ModuleNotFoundError: No module named 'fsspec.archive' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/13246825?v=4", "events_url": "https://api.github.com/users/shuuchen/events{/privacy}", "followers_url": "https://api.github.com/users/shuuchen/followers", "following_url": "https://api.github.com/users/shuuchen/following{/other_user}", "gists_url": "https://api.github.com/users/shuuchen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shuuchen", "id": 13246825, "login": "shuuchen", "node_id": "MDQ6VXNlcjEzMjQ2ODI1", "organizations_url": "https://api.github.com/users/shuuchen/orgs", "received_events_url": "https://api.github.com/users/shuuchen/received_events", "repos_url": "https://api.github.com/users/shuuchen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shuuchen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shuuchen/subscriptions", "type": "User", "url": "https://api.github.com/users/shuuchen", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3587/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3587/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:16:09
https://api.github.com/repos/huggingface/datasets/issues/3586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3586/comments
https://api.github.com/repos/huggingface/datasets/issues/3586/events
https://github.com/huggingface/datasets/issues/3586
1,106,455,672
I_kwDODunzps5B8yx4
3,586
Revisit `enable/disable_` toggle function prefix
{ "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jaketae", "id": 25360440, "login": "jaketae", "node_id": "MDQ6VXNlcjI1MzYwNDQw", "organizations_url": "https://api.github.com/users/jaketae/orgs", "received_events_url": "https://api.github.com/users/jaketae/received_events", "repos_url": "https://api.github.com/users/jaketae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "type": "User", "url": "https://api.github.com/users/jaketae", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[]
2022-01-18T04:09:55
2022-03-14T15:01:08
2022-03-14T15:01:08
CONTRIBUTOR
null
null
null
null
As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to - De-deprecating `disable_progress_bar()` - Adding `enable_progress_bar()` - On the caching side, adding `enable_caching` and `disable_caching` Additional decisions have to be made with regards to the existing `set_enabled_X` functions; that is, whether to keep them as is or deprecate them in favor of the aforementioned functions. cc @mariosasko @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3586/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3586/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
55 days, 10:51:13
https://api.github.com/repos/huggingface/datasets/issues/3585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3585/comments
https://api.github.com/repos/huggingface/datasets/issues/3585/events
https://github.com/huggingface/datasets/issues/3585
1,105,821,470
I_kwDODunzps5B6X8e
3,585
Datasets streaming + map doesn't work for `Audio`
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "This seems related to https://github.com/huggingface/datasets/issues/3505." ]
2022-01-17T12:55:42
2022-01-20T13:28:00
2022-01-20T13:28:00
CONTRIBUTOR
null
null
null
null
## Describe the bug When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("common_voice", "en", streaming=True, split="train") def map_fn(batch): print("audio keys", batch["audio"].keys()) batch["audio"] = batch["audio"]["array"][:100] return batch ds = ds.map(map_fn) sample = next(iter(ds)) ``` I think the audio is somehow decoded before `.map(...)` is actually called. ## Expected results IMO, the above code snippet should work. ## Actual results ```bash audio keys dict_keys(['path', 'bytes']) Traceback (most recent call last): File "./run_audio.py", line 15, in <module> sample = next(iter(ds)) File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__ for key, example in self._iter(): File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter yield from ex_iterable File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__ yield key, self.function(example) File "./run_audio.py", line 9, in map_fn batch["input"] = batch["audio"]["array"][:100] KeyError: 'array' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.1.dev0 - Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3585/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3585/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3 days, 0:32:18
https://api.github.com/repos/huggingface/datasets/issues/3584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3584/comments
https://api.github.com/repos/huggingface/datasets/issues/3584/events
https://github.com/huggingface/datasets/issues/3584
1,105,231,768
I_kwDODunzps5B4H-Y
3,584
https://huggingface.co/datasets/huggingface/transformers-metadata
{ "avatar_url": "https://avatars.githubusercontent.com/u/37082592?v=4", "events_url": "https://api.github.com/users/ecankirkic/events{/privacy}", "followers_url": "https://api.github.com/users/ecankirkic/followers", "following_url": "https://api.github.com/users/ecankirkic/following{/other_user}", "gists_url": "https://api.github.com/users/ecankirkic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ecankirkic", "id": 37082592, "login": "ecankirkic", "node_id": "MDQ6VXNlcjM3MDgyNTky", "organizations_url": "https://api.github.com/users/ecankirkic/orgs", "received_events_url": "https://api.github.com/users/ecankirkic/received_events", "repos_url": "https://api.github.com/users/ecankirkic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ecankirkic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ecankirkic/subscriptions", "type": "User", "url": "https://api.github.com/users/ecankirkic", "user_view_type": "public" }
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
[]
2022-01-17T00:18:14
2022-02-14T08:51:27
2022-02-14T08:51:27
NONE
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3584/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3584/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
28 days, 8:33:13
https://api.github.com/repos/huggingface/datasets/issues/3583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3583/comments
https://api.github.com/repos/huggingface/datasets/issues/3583/events
https://github.com/huggingface/datasets/issues/3583
1,105,195,144
I_kwDODunzps5B3_CI
3,583
Add The Medical Segmentation Decathlon Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/64613009?v=4", "events_url": "https://api.github.com/users/pri1311/events{/privacy}", "followers_url": "https://api.github.com/users/pri1311/followers", "following_url": "https://api.github.com/users/pri1311/following{/other_user}", "gists_url": "https://api.github.com/users/pri1311/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pri1311", "id": 64613009, "login": "pri1311", "node_id": "MDQ6VXNlcjY0NjEzMDA5", "organizations_url": "https://api.github.com/users/pri1311/orgs", "received_events_url": "https://api.github.com/users/pri1311/received_events", "repos_url": "https://api.github.com/users/pri1311/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pri1311/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pri1311/subscriptions", "type": "User", "url": "https://api.github.com/users/pri1311", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/64613009?v=4", "events_url": "https://api.github.com/users/pri1311/events{/privacy}", "followers_url": "https://api.github.com/users/pri1311/followers", "following_url": "https://api.github.com/users/pri1311/following{/other_user}", "gists_url": "https://api.github.com/users/pri1311/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pri1311", "id": 64613009, "login": "pri1311", "node_id": "MDQ6VXNlcjY0NjEzMDA5", "organizations_url": "https://api.github.com/users/pri1311/orgs", "received_events_url": "https://api.github.com/users/pri1311/received_events", "repos_url": "https://api.github.com/users/pri1311/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pri1311/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pri1311/subscriptions", "type": "User", "url": "https://api.github.com/users/pri1311", "user_view_type": "public" } ]
[ "Hello! I have recently been involved with a medical image segmentation project myself and was going through the `The Medical Segmentation Decathlon Dataset` as well. \r\nI haven't yet had experience adding datasets to this repository yet but would love to get started. Should I take this issue?\r\nIf yes, I've got two questions -\r\n1. There are 10 different datasets available, so are all datasets to be added in a single PR, or one at a time? \r\n2. Since it's a competition, masks for the test-set are not available. How is that to be tackled? Sorry if it's a silly question, I have recently started exploring `datasets`.", "Hi! Sure, feel free to take this issue. You can self-assign the issue by commenting `#self-assign`.\r\n\r\nTo answer your questions:\r\n1. It makes the most sense to add each one as a separate config, so one dataset script with 10 configs in a single PR.\r\n2. Just set masks in the test set to `None`.\r\n\r\nNote that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help with that). \r\n\r\n", "> Note that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help with that).\r\n\r\nGotcha, thanks. Will start working on the issue and let you know in case of any doubt.", "#self-assign", "This is great! There is a first model on the HUb that uses this dataset! https://huggingface.co/MONAI/example_spleen_segmentation" ]
2022-01-16T21:42:25
2022-03-18T10:44:42
null
NONE
null
null
null
null
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:** [link to the dataset paper if available](https://arxiv.org/abs/2106.05735) - **Data:** http://medicaldecathlon.com/ - **Motivation:** Hugging Face seeks to democratize ML for society. One of the growing niches within ML is the ML + Medicine community. Key data sets will help increase the supply of HF resources for starting an initial community. (cc @osanseviero @abidlabs ) Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3583/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3583/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3582/comments
https://api.github.com/repos/huggingface/datasets/issues/3582/events
https://github.com/huggingface/datasets/issues/3582
1,104,877,303
I_kwDODunzps5B2xb3
3,582
conll 2003 dataset source url is no longer valid
{ "avatar_url": "https://avatars.githubusercontent.com/u/303900?v=4", "events_url": "https://api.github.com/users/rcanand/events{/privacy}", "followers_url": "https://api.github.com/users/rcanand/followers", "following_url": "https://api.github.com/users/rcanand/following{/other_user}", "gists_url": "https://api.github.com/users/rcanand/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rcanand", "id": 303900, "login": "rcanand", "node_id": "MDQ6VXNlcjMwMzkwMA==", "organizations_url": "https://api.github.com/users/rcanand/orgs", "received_events_url": "https://api.github.com/users/rcanand/received_events", "repos_url": "https://api.github.com/users/rcanand/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rcanand/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcanand/subscriptions", "type": "User", "url": "https://api.github.com/users/rcanand", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "I came to open the same issue.", "Thanks for reporting !\r\n\r\nI pushed a temporary fix on `master` that uses an URL from a previous commit to access the dataset for now, until we have a better solution", "I changed the URL again to use another host, the fix is available on `master` and we'll probably do a new release of `datasets` tomorrow.\r\n\r\nIn the meantime, feel free to do `load_dataset(..., revision=\"master\")` to use the fixed script", "We just released a new version of `datasets` with a working URL. Feel free to update `datasets` and try again :)", "Hello! Unfortunately, this URL does not work for me. \r\nCould you please tell me how I can solve the problem?\r\n\r\n`>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"conll2003\")\r\nDownloading and preparing dataset conll2003/conll2003 (download: 4.63 MiB, generated: 9.78 MiB, post-processed: Unknown size, total: 14.41 MiB) to /home/dafedo/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/load.py\", line 745, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/builder.py\", line 630, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/dafedo/.cache/huggingface/modules/datasets_modules/datasets/conll2003/40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6/conll2003.py\", line 196, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 287, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 203, in map_nested\r\n mapped = [\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 204, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 218, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 281, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 621, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: 
Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt\r\n`\r\n\r\nI receive the same error when I run \"itrain run_configs/conll2003.json\" from https://github.com/adapter-hub/efficient-task-transfer\r\n\r\nThank you very much in advance!\r\n\r\nRegards, \r\nDaria\r\n", "Can you try updating `datasets` and try again ?\r\n```\r\npip install -U datasets\r\n```", "@lhoestq Thank you very much for your answer! \r\n\r\nIt works this way, but for my research I need datasets==1.6.3 or closest to it because otherwise the other package would not work as it is built on this version.\r\nDo you have any other suggestion? I would really appreciate it. Maybe which version of the datasets is without hard-coded link but closest to 1.6.3\r\n", "No problem, I have solved it. \r\nThank you anyway.", "Out of curiosity, which package has the `datasets==1.6.3` requirement ?" ]
2022-01-15T23:04:17
2022-07-20T13:06:40
2022-01-21T16:57:32
NONE
null
null
null
null
## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expected results The dataset should load. ## Actual results It is looking for the dataset at `https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt` but it was removed from there yesterday (see [commit](https://github.com/davidsbatista/NER-datasets/commit/9d8f45cc7331569af8eb3422bbe1c97cbebd5690) that removed the file and related [issue](https://github.com/davidsbatista/NER-datasets/issues/8)). - We should replace this with an alternate valid location. - this is being referenced in the huggingface course chapter 7 [colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section2_pt.ipynb), which is also broken. ```python FileNotFoundError Traceback (most recent call last) <ipython-input-4-27c956bec93c> in <module>() 1 from datasets import load_dataset 2 ----> 3 raw_datasets = load_dataset("conll2003") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params) 610 ) 611 elif response is not None and response.status_code == 404: --> 612 raise FileNotFoundError(f"Couldn't find file at {url}") 613 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 614 if head_error is not None: FileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 5, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/3582/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3582/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
5 days, 17:53:15
https://api.github.com/repos/huggingface/datasets/issues/3581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3581/comments
https://api.github.com/repos/huggingface/datasets/issues/3581/events
https://github.com/huggingface/datasets/issues/3581
1,104,857,822
I_kwDODunzps5B2sre
3,581
Unable to create a dataset from a parquet file in S3
{ "avatar_url": "https://avatars.githubusercontent.com/u/18012903?v=4", "events_url": "https://api.github.com/users/regCode/events{/privacy}", "followers_url": "https://api.github.com/users/regCode/followers", "following_url": "https://api.github.com/users/regCode/following{/other_user}", "gists_url": "https://api.github.com/users/regCode/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/regCode", "id": 18012903, "login": "regCode", "node_id": "MDQ6VXNlcjE4MDEyOTAz", "organizations_url": "https://api.github.com/users/regCode/orgs", "received_events_url": "https://api.github.com/users/regCode/received_events", "repos_url": "https://api.github.com/users/regCode/repos", "site_admin": false, "starred_url": "https://api.github.com/users/regCode/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/regCode/subscriptions", "type": "User", "url": "https://api.github.com/users/regCode", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Hi ! Currently it only works with local paths, file-like objects are not supported yet" ]
2022-01-15T21:34:16
2022-02-14T08:52:57
null
NONE
null
null
null
null
## Describe the bug Trying to create a dataset from a parquet file in S3. ## Steps to reproduce the bug ```python import s3fs from datasets import Dataset s3 = s3fs.S3FileSystem(anon=False) with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: dataset = Dataset.from_parquet(s3file) ``` ## Expected results A new Dataset object ## Actual results ```AttributeError: 'S3File' object has no attribute 'decode'``` ``` AttributeError Traceback (most recent call last) <command-2452877612515691> in <module> 5 6 with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: ----> 7 dataset = Dataset.from_parquet(s3file) /databricks/python/lib/python3.8/site-packages/datasets/arrow_dataset.py in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs) 907 from .io.parquet import ParquetDatasetReader 908 --> 909 return ParquetDatasetReader( 910 path_or_paths, 911 split=split, /databricks/python/lib/python3.8/site-packages/datasets/io/parquet.py in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, **kwargs) 28 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths} 29 hash = _PACKAGED_DATASETS_MODULES["parquet"][1] ---> 30 self.builder = Parquet( 31 cache_dir=cache_dir, 32 data_files=path_or_paths, /databricks/python/lib/python3.8/site-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, base_path, info, features, use_auth_token, namespace, data_files, data_dir, **config_kwargs) 246 247 if data_files is not None and not isinstance(data_files, DataFilesDict): --> 248 data_files = DataFilesDict.from_local_or_remote( 249 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token 250 ) /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 576 for key, patterns_for_key in patterns.items(): 577 out[key] = ( --> 578 DataFilesList.from_local_or_remote( 579 patterns_for_key, 580 base_path=base_path, /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 544 ) -> "DataFilesList": 545 base_path = base_path if base_path is not None else str(Path().resolve()) --> 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token) 548 return cls(data_files, origin_metadata) /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) 191 data_files = [] 192 for pattern in patterns: --> 193 if is_remote_url(pattern): 194 data_files.append(Url(pattern)) 195 else: /databricks/python/lib/python3.8/site-packages/datasets/utils/file_utils.py in is_remote_url(url_or_filename) 115 116 def is_remote_url(url_or_filename: str) -> bool: --> 117 parsed = urlparse(url_or_filename) 118 return parsed.scheme in ("http", "https", "s3", "gs", "hdfs", "ftp") 119 /usr/lib/python3.8/urllib/parse.py in urlparse(url, scheme, allow_fragments) 370 Note that we don't break the components up in smaller bits 371 (e.g. netloc is a single string) and we don't expand % escapes.""" --> 372 url, scheme, _coerce_result = _coerce_args(url, scheme) 373 splitresult = urlsplit(url, scheme, allow_fragments) 374 scheme, netloc, url, query, fragment = splitresult /usr/lib/python3.8/urllib/parse.py in _coerce_args(*args) 122 if str_input: 123 return args + (_noop,) --> 124 return _decode_args(args) + (_encode_result,) 125 126 # Result objects are more helpful than simple tuples /usr/lib/python3.8/urllib/parse.py in _decode_args(args, encoding, errors) 106 def _decode_args(args, encoding=_implicit_encoding, 107 errors=_implicit_errors): --> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args) 109 110 def _coerce_args(*args): /usr/lib/python3.8/urllib/parse.py in <genexpr>(.0) 106 def _decode_args(args, encoding=_implicit_encoding, 107 errors=_implicit_errors): --> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args) 109 110 def _coerce_args(*args): AttributeError: 'S3File' object has no attribute 'decode' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Ubuntu 20.04.3 LTS - Python version: 3.8.10 - PyArrow version: 6.0.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3581/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3581/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3580/comments
https://api.github.com/repos/huggingface/datasets/issues/3580/events
https://github.com/huggingface/datasets/issues/3580
1,104,663,242
I_kwDODunzps5B19LK
3,580
Bug in wiki bio load
{ "avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4", "events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}", "followers_url": "https://api.github.com/users/tuhinjubcse/followers", "following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}", "gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tuhinjubcse", "id": 3104771, "login": "tuhinjubcse", "node_id": "MDQ6VXNlcjMxMDQ3NzE=", "organizations_url": "https://api.github.com/users/tuhinjubcse/orgs", "received_events_url": "https://api.github.com/users/tuhinjubcse/received_events", "repos_url": "https://api.github.com/users/tuhinjubcse/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions", "type": "User", "url": "https://api.github.com/users/tuhinjubcse", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
[ "+1, here's the error I got: \r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>>\r\n>>> load_dataset(\"wiki_bio\")\r\nDownloading: 7.58kB [00:00, 4.42MB/s]\r\nDownloading: 2.71kB [00:00, 1.30MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to /home/jxm3/.cache/huggingface/datasets/wiki_bio/default/1.1.0/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/load.py\", line 1694, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/builder.py\", line 662, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/wiki_bio/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9/wiki_bio.py\", line 125, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 308, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 251, in map_nested\r\n return function(data_struct)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 298, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 612, in get_from_cache\r\n raise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil\r\n>>>\r\n```\r\n", "@alejandrocros and @lhoestq - you added the wiki_bio dataset in #1173. It doesn't work anymore. Can you take a look at this?", "And if something is wrong with Google Drive, you could try to download (and collate and unzip) from here: https://github.com/DavidGrangier/wikipedia-biography-dataset", "Hi ! Thanks for reporting. I've downloaded the data and concatenated them into a zip file available here: https://huggingface.co/datasets/wiki_bio/tree/main/data\r\n\r\nI guess we can update the dataset script to use this zip file now :)" ]
2022-01-15T10:04:33
2022-01-31T08:38:09
2022-01-31T08:38:09
NONE
null
null
null
null
wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com/3104771/149617875-ef0e30b0-b76e-48cf-b3eb-93ba8e6e5465.png) a
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3580/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3580/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
15 days, 22:33:36
https://api.github.com/repos/huggingface/datasets/issues/3578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3578/comments
https://api.github.com/repos/huggingface/datasets/issues/3578/events
https://github.com/huggingface/datasets/issues/3578
1,103,403,287
I_kwDODunzps5BxJkX
3,578
label information get lost after parquet serialization
{ "avatar_url": "https://avatars.githubusercontent.com/u/56633664?v=4", "events_url": "https://api.github.com/users/Tudyx/events{/privacy}", "followers_url": "https://api.github.com/users/Tudyx/followers", "following_url": "https://api.github.com/users/Tudyx/following{/other_user}", "gists_url": "https://api.github.com/users/Tudyx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tudyx", "id": 56633664, "login": "Tudyx", "node_id": "MDQ6VXNlcjU2NjMzNjY0", "organizations_url": "https://api.github.com/users/Tudyx/orgs", "received_events_url": "https://api.github.com/users/Tudyx/received_events", "repos_url": "https://api.github.com/users/Tudyx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tudyx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tudyx/subscriptions", "type": "User", "url": "https://api.github.com/users/Tudyx", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! We did a release of `datasets` today that may fix this issue. Can you try updating `datasets` and trying again ?\r\n\r\nEDIT: the issue is still there actually\r\n\r\nI think we can fix that by storing the Features in the parquet schema metadata, and then reload them when loading the parquet file", "This info is stored in the Parquet schema metadata as of https://github.com/huggingface/datasets/pull/5516" ]
2022-01-14T10:10:38
2023-07-25T15:44:53
2023-07-25T15:44:53
NONE
null
null
null
null
## Describe the bug In *dataset_info.json* file, information about the label get lost after the dataset serialization. ## Steps to reproduce the bug ```python from datasets import load_dataset # normal save dataset = load_dataset('glue', 'sst2', split='train') dataset.save_to_disk("normal_save") # save after parquet serialization dataset.to_parquet("glue-sst2-train.parquet") dataset = load_dataset("parquet", data_files='glue-sst2-train.parquet') dataset.save_to_disk("save_after_parquet") ``` ## Expected results I expected to keep label information in *dataset_info.json* file even after parquet serialization ## Actual results In the normal serialization i got ```json "label": { "num_classes": 2, "names": [ "negative", "positive" ], "names_file": null, "id": null, "_type": "ClassLabel" }, ``` And after parquet serialization i got ```json "label": { "dtype": "int64", "id": null, "_type": "Value" }, ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: ubuntu 20.04 - Python version: 3.8.10 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3578/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3578/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
557 days, 5:34:15
https://api.github.com/repos/huggingface/datasets/issues/3577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3577/comments
https://api.github.com/repos/huggingface/datasets/issues/3577/events
https://github.com/huggingface/datasets/issues/3577
1,102,598,241
I_kwDODunzps5BuFBh
3,577
Add The Mexican Emotional Speech Database (MESD)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
open
false
null
[]
[]
2022-01-13T23:49:36
2022-01-27T14:14:38
null
NONE
null
null
null
null
## Adding a Dataset

- **Name:** *The Mexican Emotional Speech Database (MESD)*
- **Description:** *Contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child.*
- **Paper:** *[Paper](https://ieeexplore.ieee.org/abstract/document/9629934/authors#authors)*
- **Data:** *[link to the Github repository or current dataset location](https://data.mendeley.com/datasets/cy34mh68j9/3)*
- **Motivation:** *Would add Spanish speech data to the HF datasets :)*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3577/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3577/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3572/comments
https://api.github.com/repos/huggingface/datasets/issues/3572/events
https://github.com/huggingface/datasets/issues/3572
1,100,634,244
I_kwDODunzps5BmliE
3,572
ConnectionError in IndicGLUE dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/79107194?v=4", "events_url": "https://api.github.com/users/sahoodib/events{/privacy}", "followers_url": "https://api.github.com/users/sahoodib/followers", "following_url": "https://api.github.com/users/sahoodib/following{/other_user}", "gists_url": "https://api.github.com/users/sahoodib/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sahoodib", "id": 79107194, "login": "sahoodib", "node_id": "MDQ6VXNlcjc5MTA3MTk0", "organizations_url": "https://api.github.com/users/sahoodib/orgs", "received_events_url": "https://api.github.com/users/sahoodib/received_events", "repos_url": "https://api.github.com/users/sahoodib/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sahoodib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sahoodib/subscriptions", "type": "User", "url": "https://api.github.com/users/sahoodib", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "@sahoodib, thanks for reporting.\r\n\r\nIndeed, none of the data links appearing in the IndicGLUE website are working, e.g.: https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/soham-articles.tar.gz\r\n```\r\n<Error>\r\n<Code>UserProjectAccountProblem</Code>\r\n<Message>User project billing account not in good standing.</Message>\r\n<Details>\r\nThe billing account for the owning project is disabled in state delinquent\r\n</Details>\r\n</Error>\r\n```\r\n\r\nWe have contacted the data owners to inform them about their issue and ask them if they plan to fix it.", "Yesterday I resent a reminder email with more AI4Bharat-related people in the loop.\r\n\r\nI also opened an issue in their repos:\r\n- https://github.com/AI4Bharat/indicnlp_corpus/issues/14\r\n- https://github.com/AI4Bharat/ai4bharat.org/issues/71", "We have received a reply from the authors reporting they have updated the URLs of their data files and opened a PR. See:\r\n- #4978 " ]
2022-01-12T17:59:36
2022-09-15T21:57:34
2022-09-15T21:57:34
NONE
null
null
null
null
While trying to load the IndicGLUE dataset (https://huggingface.co/datasets/indic_glue), I get the following error:

```
ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403)
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3572/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3572/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
246 days, 3:57:58
https://api.github.com/repos/huggingface/datasets/issues/3568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3568/comments
https://api.github.com/repos/huggingface/datasets/issues/3568/events
https://github.com/huggingface/datasets/issues/3568
1,100,380,631
I_kwDODunzps5BlnnX
3,568
Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError
{ "avatar_url": "https://avatars.githubusercontent.com/u/49265757?v=4", "events_url": "https://api.github.com/users/fabianslife/events{/privacy}", "followers_url": "https://api.github.com/users/fabianslife/followers", "following_url": "https://api.github.com/users/fabianslife/following{/other_user}", "gists_url": "https://api.github.com/users/fabianslife/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fabianslife", "id": 49265757, "login": "fabianslife", "node_id": "MDQ6VXNlcjQ5MjY1NzU3", "organizations_url": "https://api.github.com/users/fabianslife/orgs", "received_events_url": "https://api.github.com/users/fabianslife/received_events", "repos_url": "https://api.github.com/users/fabianslife/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fabianslife/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabianslife/subscriptions", "type": "User", "url": "https://api.github.com/users/fabianslife", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @fabianslife, thanks for reporting.\r\n\r\nI think you were using an old version of `datasets` because this bug was already fixed in version `1.13.0` (13 Oct 2021):\r\n- Fix: 55fd140a63b8f03a0e72985647e498f1fc799d3f\r\n- PR: #3046\r\n- Issue: #2969 \r\n\r\nPlease, feel free to update the library: `pip install -U datasets`." ]
2022-01-12T14:03:44
2022-02-14T09:32:34
2022-02-14T09:32:34
NONE
null
null
null
null
I wanted to download the Nedical Dialog Dataset from huggingface, using this github link: https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog After downloading the raw datasets from google drive, i unpacked everything and put it in the same folder as the medical_dialog.py which is: ``` import copy import os import re import datasets _CITATION = """\ @article{chen2020meddiag, title={MedDialog: a large-scale medical dialogue dataset}, author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao}, journal={arXiv preprint arXiv:2004.03329}, year={2020} } """ _DESCRIPTION = """\ The MedDialog dataset (English) contains conversations (in English) between doctors and patients.\ It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. \ The raw dialogues are from healthcaremagic.com and icliniq.com.\ All copyrights of the data belong to healthcaremagic.com and icliniq.com. """ _HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System" _LICENSE = "" class MedicalDialog(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") BUILDER_CONFIGS = [ datasets.BuilderConfig(name="en", description="The dataset of medical dialogs in English.", version=VERSION), datasets.BuilderConfig(name="zh", description="The dataset of medical dialogs in Chinese.", version=VERSION), ] @property def manual_download_instructions(self): return """\ \n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\ and manually download the dataset from Google Drive. Once it is completed, a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder( or whichever folder your browser chooses to save files to). Unzip the folder to obtain a folder named "Medical-Dialogue-Dataset-English" several text files. Now, you can specify the path to this folder for the data_dir argument in the datasets.load_dataset(...) option. The <path/to/folder> can e.g. be "/Downloads/Medical-Dialogue-Dataset-English". The data can then be loaded using the below command:\ datasets.load_dataset("medical_dialog", name="en", data_dir="/Downloads/Medical-Dialogue-Dataset-English")`. \n For Chinese:\nFollow the above process. Change the 'name' to 'zh'.The download link is https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2 **NOTE** - A caution while downloading from drive. It is better to download single files since creating a zip might not include files <500 MB. This has been observed mutiple times. - After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input tu the data_dir path. 
""" datasets.load_dataset("medical_dialog", name="en", data_dir="Medical-Dialogue-Dataset-English") def _info(self): if self.config.name == "zh": features = datasets.Features( { "file_name": datasets.Value("string"), "dialogue_id": datasets.Value("int32"), "dialogue_url": datasets.Value("string"), "dialogue_turns": datasets.Sequence( { "speaker": datasets.ClassLabel(names=["病人", "医生"]), "utterance": datasets.Value("string"), } ), } ) if self.config.name == "en": features = datasets.Features( { "file_name": datasets.Value("string"), "dialogue_id": datasets.Value("int32"), "dialogue_url": datasets.Value("string"), "dialogue_turns": datasets.Sequence( { "speaker": datasets.ClassLabel(names=["Patient", "Doctor"]), "utterance": datasets.Value("string"), } ), } ) return datasets.DatasetInfo( # This is the description that will appear on the datasets page. description=_DESCRIPTION, features=features, supervised_keys=None, # Homepage of the dataset for documentation homepage=_HOMEPAGE, # License for the dataset if available license=_LICENSE, # Citation for the dataset citation=_CITATION, ) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir)) if not os.path.exists(path_to_manual_file): raise FileNotFoundError( f"{path_to_manual_file} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})" ) filepaths = [ os.path.join(path_to_manual_file, txt_file_name) for txt_file_name in sorted(os.listdir(path_to_manual_file)) if txt_file_name.endswith("txt") ] return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})] def _generate_examples(self, filepaths): """Yields examples. Iterates over each file and give the creates the corresponding features. NOTE: - The code makes some assumption on the structure of the raw .txt file. - There are some checks to separate different id's. Hopefully, should not cause further issues later when more txt files are added. """ data_lang = self.config.name id_ = -1 for filepath in filepaths: with open(filepath, encoding="utf-8") as f_in: # Parameters to just "sectionize" the raw data last_part = "" last_dialog = {} last_list = [] last_user = "" check_list = [] # These flags are present to have a single function address both chinese and english data # English data is a little hahazard (i.e. the sentences spans multiple different lines), # Chinese is compact with one line for doctor and patient. conv_flag = False des_flag = False while True: line = f_in.readline() if not line: break # Extracting the dialog id if line[:2] == "id": # Hardcode alert! # Handling ID references that may come in the description # These were observed in the Chinese dataset and were not # followed by numbers try: dialogue_id = int(re.findall(r"\d+", line)[0]) except IndexError: continue # Extracting the url if line[:4] == "http": # Hardcode alert! dialogue_url = line.rstrip() # Extracting the patient info from description. if line[:11] == "Description": # Hardcode alert! last_part = "description" last_dialog = {} last_list = [] last_user = "" last_conv = {"speaker": "", "utterance": ""} while True: line = f_in.readline() if (not line) or (line in ["\n", "\n\r"]): break else: if data_lang == "zh": # Condition in chinese if line[:5] == "病情描述:": # Hardcode alert! 
last_user = "病人" sen = f_in.readline().rstrip() des_flag = True if data_lang == "en": last_user = "Patient" sen = line.rstrip() des_flag = True if des_flag: if sen == "": continue if sen in check_list: last_conv["speaker"] = "" last_conv["utterance"] = "" else: last_conv["speaker"] = last_user last_conv["utterance"] = sen check_list.append(sen) des_flag = False break # Extracting the conversation info from dialogue. elif line[:8] == "Dialogue": # Hardcode alert! if last_part == "description" and len(last_conv["utterance"]) > 0: last_part = "dialogue" if data_lang == "zh": last_user = "病人" if data_lang == "en": last_user = "Patient" while True: line = f_in.readline() if (not line) or (line in ["\n", "\n\r"]): conv_flag = False last_user = "" last_list.append(copy.deepcopy(last_conv)) # To ensure close of conversation, only even number of sentences # are extracted last_turn = len(last_list) if int(last_turn / 2) > 0: temp = int(last_turn / 2) id_ += 1 last_dialog["file_name"] = filepath last_dialog["dialogue_id"] = dialogue_id last_dialog["dialogue_url"] = dialogue_url last_dialog["dialogue_turns"] = last_list[: temp * 2] yield id_, last_dialog break if data_lang == "zh": if line[:3] == "病人:" or line[:3] == "医生:": # Hardcode alert! user = line[:2] # Hardcode alert! line = f_in.readline() conv_flag = True # The elif block is to ensure that multi-line sentences are captured. # This has been observed only in english. if data_lang == "en": if line.strip() == "Patient:" or line.strip() == "Doctor:": # Hardcode alert! user = line.replace(":", "").rstrip() line = f_in.readline() conv_flag = True elif line[:2] != "id": # Hardcode alert! conv_flag = True # Continues till the next ID is parsed if conv_flag: sen = line.rstrip() if sen == "": continue if user == last_user: last_conv["utterance"] = last_conv["utterance"] + sen else: last_user = user last_list.append(copy.deepcopy(last_conv)) last_conv["utterance"] = sen last_conv["speaker"] = user ``` running this code gives me the error: ``` File "C:\Users\Fabia\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\utils\info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=292801173, num_examples=229674, dataset_name='medical_dialog')}] ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3568/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3568/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
32 days, 19:28:50
https://api.github.com/repos/huggingface/datasets/issues/3563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3563/comments
https://api.github.com/repos/huggingface/datasets/issues/3563/events
https://github.com/huggingface/datasets/issues/3563
1,099,070,368
I_kwDODunzps5Bgnug
3,563
Dataset.from_pandas preserves useless index
{ "avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4", "events_url": "https://api.github.com/users/Sorrow321/events{/privacy}", "followers_url": "https://api.github.com/users/Sorrow321/followers", "following_url": "https://api.github.com/users/Sorrow321/following{/other_user}", "gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sorrow321", "id": 20703486, "login": "Sorrow321", "node_id": "MDQ6VXNlcjIwNzAzNDg2", "organizations_url": "https://api.github.com/users/Sorrow321/orgs", "received_events_url": "https://api.github.com/users/Sorrow321/received_events", "repos_url": "https://api.github.com/users/Sorrow321/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions", "type": "User", "url": "https://api.github.com/users/Sorrow321", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value to not have this as a breaking change. " ]
2022-01-11T12:07:07
2022-01-12T16:11:27
2022-01-12T16:11:27
CONTRIBUTOR
null
null
null
null
## Describe the bug

Let's say you want to create a `Dataset` object from a pandas DataFrame. Most likely you will write something like this:

```python
import pandas as pd
from datasets import Dataset

df = pd.read_csv('some_dataset.csv')

# Some DataFrame preprocessing code...

dataset = Dataset.from_pandas(df)
```

If your preprocessing code contains indexing operations like this:

```python
df = df[df.col1 == some_value]
```

then your `df.index` can change from the (default) `RangeIndex(start=0, stop=16590, step=1)` to something like

```
Int64Index([    0,     1,     2,     3,     4,     5,     6,     7,     8,     9,
            ...
            83979, 83980, 83981, 83982, 83983, 83984, 83985, 83986, 83987, 83988],
           dtype='int64', length=16590)
```

In this case, PyArrow (by default) will preserve this non-standard index. As a result, your dataset object will have an extra field that you likely don't want: `'__index_level_0__'`. You can easily fix this by adding the extra argument `preserve_index=False` to the call of `InMemoryTable.from_pandas` in `arrow_dataset.py`. If you agree that this isn't desirable behavior, I can make a PR fixing that.

## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.11.0-44-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
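Until such a change is merged, a user-side sketch of the same fix (assuming the stray column keeps the default pandas/PyArrow name `__index_level_0__`) is to drop the index before or after conversion:

```python
import pandas as pd
from datasets import Dataset

df = pd.read_csv("some_dataset.csv")
# "col1" and "some_value" are placeholders from the report above.
df = df[df["col1"] == "some_value"]  # filtering leaves a non-contiguous index

# Option 1: reset the index so nothing extra is carried over.
dataset = Dataset.from_pandas(df.reset_index(drop=True))

# Option 2: drop the stray column after conversion.
dataset = Dataset.from_pandas(df).remove_columns("__index_level_0__")
```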
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3563/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3563/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 4:04:20
https://api.github.com/repos/huggingface/datasets/issues/3561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3561/comments
https://api.github.com/repos/huggingface/datasets/issues/3561/events
https://github.com/huggingface/datasets/issues/3561
1,098,328,870
I_kwDODunzps5Bdysm
3,561
Cannot load ‘bookcorpusopen’
{ "avatar_url": "https://avatars.githubusercontent.com/u/54684403?v=4", "events_url": "https://api.github.com/users/HUIYINXUE/events{/privacy}", "followers_url": "https://api.github.com/users/HUIYINXUE/followers", "following_url": "https://api.github.com/users/HUIYINXUE/following{/other_user}", "gists_url": "https://api.github.com/users/HUIYINXUE/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HUIYINXUE", "id": 54684403, "login": "HUIYINXUE", "node_id": "MDQ6VXNlcjU0Njg0NDAz", "organizations_url": "https://api.github.com/users/HUIYINXUE/orgs", "received_events_url": "https://api.github.com/users/HUIYINXUE/received_events", "repos_url": "https://api.github.com/users/HUIYINXUE/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HUIYINXUE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HUIYINXUE/subscriptions", "type": "User", "url": "https://api.github.com/users/HUIYINXUE", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/))\r\n\r\nFinding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset some time ago.\r\n\r\nThere are community-created versions of BookCorpus, such as the files hosted in the link below.\r\nhttps://battle.shawwn.com/sdb/bookcorpus/\r\n\r\nAnd more discussion here:\r\nhttps://github.com/soskek/bookcorpus\r\n\r\nDo we want to remove this dataset entirely? There's a fair argument for this, given that the official BookCorpus dataset was taken down by the authors. If not, perhaps can open a PR with the link to the community-created tar above and updated dataset description.", "Hi! The `bookcorpusopen` dataset is not working for the same reason as explained in this comment: https://github.com/huggingface/datasets/issues/3504#issuecomment-1004564980", "Hi @HUIYINXUE, it should work now that the data owners created a mirror server with all data, and we updated the URL in our library." ]
2022-01-10T20:17:18
2022-02-14T09:19:27
2022-02-14T09:18:47
NONE
null
null
null
null
## Describe the bug

Cannot load 'bookcorpusopen'.

## Steps to reproduce the bug

```python
dataset = load_dataset('bookcorpusopen')
```

or

```python
dataset = load_dataset('bookcorpusopen', script_version='master')
```

## Actual results

ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz

## Environment info
- `datasets` version: 1.9.0
- Platform: Linux version 3.10.0-1160.45.1.el7.x86_64
- Python version: 3.6.13
- PyArrow version: 6.0.1
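Per the maintainer comment above, the data owners later set up a mirror and the URL was updated in the library, so a minimal retry sketch (assuming a `datasets` release that already contains the updated URL) is simply:

```python
# pip install -U datasets  # pick up a release with the updated books1.tar.gz URL
from datasets import load_dataset

dataset = load_dataset("bookcorpusopen", split="train")
print(dataset)
```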
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3561/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3561/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
34 days, 13:01:29
https://api.github.com/repos/huggingface/datasets/issues/3558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3558/comments
https://api.github.com/repos/huggingface/datasets/issues/3558/events
https://github.com/huggingface/datasets/issues/3558
1,098,025,866
I_kwDODunzps5BcouK
3,558
Integrate Milvus (pymilvus) library
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/83447078?v=4", "events_url": "https://api.github.com/users/xiaofan-luan/events{/privacy}", "followers_url": "https://api.github.com/users/xiaofan-luan/followers", "following_url": "https://api.github.com/users/xiaofan-luan/following{/other_user}", "gists_url": "https://api.github.com/users/xiaofan-luan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiaofan-luan", "id": 83447078, "login": "xiaofan-luan", "node_id": "MDQ6VXNlcjgzNDQ3MDc4", "organizations_url": "https://api.github.com/users/xiaofan-luan/orgs", "received_events_url": "https://api.github.com/users/xiaofan-luan/received_events", "repos_url": "https://api.github.com/users/xiaofan-luan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiaofan-luan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaofan-luan/subscriptions", "type": "User", "url": "https://api.github.com/users/xiaofan-luan", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/83447078?v=4", "events_url": "https://api.github.com/users/xiaofan-luan/events{/privacy}", "followers_url": "https://api.github.com/users/xiaofan-luan/followers", "following_url": "https://api.github.com/users/xiaofan-luan/following{/other_user}", "gists_url": "https://api.github.com/users/xiaofan-luan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiaofan-luan", "id": 83447078, "login": "xiaofan-luan", "node_id": "MDQ6VXNlcjgzNDQ3MDc4", "organizations_url": "https://api.github.com/users/xiaofan-luan/orgs", "received_events_url": "https://api.github.com/users/xiaofan-luan/received_events", "repos_url": "https://api.github.com/users/xiaofan-luan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiaofan-luan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaofan-luan/subscriptions", "type": "User", "url": "https://api.github.com/users/xiaofan-luan", "user_view_type": "public" } ]
[ "Hi @mariosasko,Just search randomly and I found this issue~ I'm the tech lead of Milvus and we are looking forward to integrate milvus together with huggingface datasets.\r\n\r\nAny suggestion on how we could start?\r\n", "Feel free to assign to me and we probably need some guide on it", "@mariosasko any updates my man?\r\n", "Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.", "> Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.\r\n\r\nSure, we take a look and do some research" ]
2022-01-10T15:20:29
2022-03-05T12:28:36
null
COLLABORATOR
null
null
null
null
Milvus is a popular open-source vector database. We should add a new vector index to support this project.
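For reference, a rough sketch of the client calls such an index could wrap, assuming a running Milvus 2.x server on `localhost:19530` and the `pymilvus` 2.x client (this only illustrates the Milvus side, not the eventual `datasets` API):

```python
from pymilvus import connections, Collection, CollectionSchema, FieldSchema, DataType

connections.connect(alias="default", host="localhost", port="19530")

# Collection with an int64 primary key and a float vector field.
schema = CollectionSchema([
    FieldSchema(name="id", dtype=DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema(name="embedding", dtype=DataType.FLOAT_VECTOR, dim=8),
])
collection = Collection(name="dataset_embeddings", schema=schema)

# Column-wise insert: one list per field.
ids = [0, 1, 2]
vectors = [[float(i)] * 8 for i in range(3)]
collection.insert([ids, vectors])

collection.create_index(
    field_name="embedding",
    index_params={"index_type": "IVF_FLAT", "metric_type": "L2", "params": {"nlist": 128}},
)
collection.load()

# Nearest-neighbour search for a single query vector.
results = collection.search(
    data=[[0.0] * 8],
    anns_field="embedding",
    param={"metric_type": "L2", "params": {"nprobe": 10}},
    limit=2,
)
print(results[0].ids)
```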
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3558/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3558/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3555
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3555/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3555/comments
https://api.github.com/repos/huggingface/datasets/issues/3555/events
https://github.com/huggingface/datasets/issues/3555
1,097,736,982
I_kwDODunzps5BbiMW
3,555
DuplicatedKeysError when loading tweet_qa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4", "events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}", "followers_url": "https://api.github.com/users/LeonieWeissweiler/followers", "following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}", "gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LeonieWeissweiler", "id": 30300891, "login": "LeonieWeissweiler", "node_id": "MDQ6VXNlcjMwMzAwODkx", "organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs", "received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events", "repos_url": "https://api.github.com/users/LeonieWeissweiler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions", "type": "User", "url": "https://api.github.com/users/LeonieWeissweiler", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "Hi, we've just merged the PR with the fix. The fixed version of the dataset can be downloaded as follows:\r\n```python\r\nimport datasets\r\ndset = datasets.load_dataset(\"tweet_qa\", revision=\"master\")\r\n```" ]
2022-01-10T10:53:11
2022-01-12T15:17:33
2022-01-12T15:13:56
NONE
null
null
null
null
When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs:

`DuplicatedKeysError: FAILURE TO GENERATE DATASET! Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e. Keys should be unique and deterministic in nature.`

Might be related to issues #2433 and #2333.

- `datasets` version: 1.17.0
- Python version: 3.8.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3555/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3555/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 4:20:45
https://api.github.com/repos/huggingface/datasets/issues/3554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3554/comments
https://api.github.com/repos/huggingface/datasets/issues/3554/events
https://github.com/huggingface/datasets/issues/3554
1,097,711,367
I_kwDODunzps5Bbb8H
3,554
ImportError: cannot import name 'is_valid_waiter_error'
{ "avatar_url": "https://avatars.githubusercontent.com/u/84714841?v=4", "events_url": "https://api.github.com/users/danielbellhv/events{/privacy}", "followers_url": "https://api.github.com/users/danielbellhv/followers", "following_url": "https://api.github.com/users/danielbellhv/following{/other_user}", "gists_url": "https://api.github.com/users/danielbellhv/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danielbellhv", "id": 84714841, "login": "danielbellhv", "node_id": "MDQ6VXNlcjg0NzE0ODQx", "organizations_url": "https://api.github.com/users/danielbellhv/orgs", "received_events_url": "https://api.github.com/users/danielbellhv/received_events", "repos_url": "https://api.github.com/users/danielbellhv/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danielbellhv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danielbellhv/subscriptions", "type": "User", "url": "https://api.github.com/users/danielbellhv", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! I can't reproduce this error in Colab, but I'm assuming you are using Amazon SageMaker Studio Notebooks (you mention the `conda_pytorch_p36` kernel), so maybe @philschmid knows more about what might be causing this issue? ", "Hey @mariosasko. Yes, I am using **Amazon SageMaker Studio Jupyter Labs**. However, I no longer need this notebook; but it would be nice to have this problem solved for others. So don't stress too much if you two can't reproduce error.", "Hey @danielbellhv, \r\n\r\nThis issue might be related to Studio probably not having an up to date `botocore` and `boto3` version. I ran into this as well a while back. My workaround was \r\n```python\r\n# using older dataset due to incompatibility of sagemaker notebook & aws-cli with > s3fs and fsspec to >= 2021.10\r\n!pip install \"datasets==1.13\" --upgrade\r\n```\r\n\r\nIn `datasets` we use the latest `s3fs` and `fsspec` but aws-cli and notebook is not supporting this. You could also update the `aws-cli` and associated packages to get the latest `datasets` version\r\n" ]
2022-01-10T10:32:04
2022-02-14T09:35:57
2022-02-14T09:35:57
NONE
null
null
null
null
Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.17.0) Requirement already satisfied: transformers in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (4.15.0) Requirement already satisfied: optimum[intel] in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (0.1.3) Requirement already satisfied: numpy>=1.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.19.5) Requirement already satisfied: dill in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.3.4) Requirement already satisfied: tqdm>=4.62.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.62.3) Requirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.2.1) Requirement already satisfied: packaging in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (21.3) Requirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (6.0.1) Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.1.5) Requirement already satisfied: xxhash in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.0.2) Requirement already satisfied: aiohttp in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (3.8.1) Requirement already satisfied: fsspec[http]>=2021.05.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2021.11.1) Requirement already satisfied: dataclasses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.8) Requirement already satisfied: multiprocess in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.70.12.2) Requirement already satisfied: importlib-metadata in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.5.0) Requirement already satisfied: requests>=2.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.25.1) Requirement already satisfied: pyyaml>=5.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (5.4.1) Requirement already satisfied: regex!=2019.12.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (2021.4.4) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.10.3) Requirement already satisfied: filelock in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (3.0.12) Requirement already satisfied: sacremoses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.0.46) Requirement already satisfied: torch>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.10.1) Requirement already satisfied: sympy in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.8) Requirement already satisfied: coloredlogs in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (15.0.1) Requirement already satisfied: pycocotools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (2.0.3) Requirement already satisfied: neural-compressor>=1.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.9) Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.10.0.0) Requirement already satisfied: sigopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.2.0) Requirement already satisfied: opencv-python in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (4.5.1.48) Requirement already satisfied: cryptography in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.4.7) Requirement already satisfied: py-cpuinfo in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.0.0) Requirement already satisfied: gevent in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (21.1.2) Requirement already satisfied: schema in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.7.5) Requirement already satisfied: psutil in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.8.0) Requirement already satisfied: gevent-websocket in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.10.1) Requirement already satisfied: hyperopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.2.7) Requirement already satisfied: Flask in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: prettytable in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.5.0) Requirement already satisfied: Flask-SocketIO in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.1.1) Requirement already satisfied: scikit-learn in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.24.2) Requirement already satisfied: Pillow in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.4.0) Requirement already satisfied: Flask-Cors in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.0.10) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging->datasets) (2.4.7) Requirement already satisfied: chardet<5,>=3.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages 
(from requests>=2.19.0->datasets) (4.0.0) Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2021.5.30) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (1.26.5) Requirement already satisfied: idna<3,>=2.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2.10) Requirement already satisfied: yarl<2.0,>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.6.3) Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (2.0.9) Requirement already satisfied: attrs>=17.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (21.2.0) Requirement already satisfied: asynctest==0.13.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (0.13.0) Requirement already satisfied: idna-ssl>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.1.0) Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (4.0.1) Requirement already satisfied: aiosignal>=1.1.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0) Requirement already satisfied: frozenlist>=1.1.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0) Requirement already satisfied: multidict<7.0,>=4.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (5.1.0) Requirement already satisfied: humanfriendly>=9.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from coloredlogs->optimum[intel]) (10.0) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata->datasets) (3.4.1) Requirement already satisfied: python-dateutil>=2.7.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2021.1) Requirement already satisfied: matplotlib>=2.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (3.3.4) Requirement already satisfied: cython>=0.27.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (0.29.23) Requirement already satisfied: setuptools>=18.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (52.0.0.post20210125) Requirement already satisfied: joblib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.0.1) Requirement already satisfied: click in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (8.0.1) Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.16.0) Requirement already 
satisfied: mpmath>=0.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sympy->optimum[intel]) (1.2.1) Requirement already satisfied: kiwisolver>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (1.3.1) Requirement already satisfied: cycler>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (0.10.0) Requirement already satisfied: cffi>=1.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cryptography->neural-compressor>=1.7->optimum[intel]) (1.14.5) Requirement already satisfied: Werkzeug>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.2) Requirement already satisfied: Jinja2>=3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (3.0.1) Requirement already satisfied: itsdangerous>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: python-socketio>=5.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (5.5.0) Requirement already satisfied: zope.event in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (4.5.0) Requirement already satisfied: greenlet<2.0,>=0.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (1.1.0) Requirement already satisfied: zope.interface in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (5.4.0) Requirement already satisfied: future in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.18.2) Requirement already satisfied: cloudpickle in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.6.0) Requirement already satisfied: networkx>=2.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (2.5) Requirement already satisfied: scipy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.5.3) Requirement already satisfied: py4j in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.10.7) Requirement already satisfied: wcwidth in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from prettytable->neural-compressor>=1.7->optimum[intel]) (0.2.5) Requirement already satisfied: contextlib2>=0.5.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from schema->neural-compressor>=1.7->optimum[intel]) (0.6.0.post1) Requirement already satisfied: threadpoolctl>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from scikit-learn->neural-compressor>=1.7->optimum[intel]) (2.1.0) Requirement already satisfied: pyOpenSSL>=20.0.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (20.0.1) Requirement already satisfied: pypng>=0.0.20 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.0.21) Requirement already satisfied: kubernetes<13.0.0,>=12.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (12.0.1) Requirement already satisfied: rsa<5.0.0,>=4.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.7.2) Requirement already satisfied: boto3<2.0.0,==1.16.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.16.34) Requirement already satisfied: Pint<0.17.0,>=0.16.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.16.1) Requirement already satisfied: GitPython>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.18) Requirement already satisfied: backoff<2.0.0,>=1.10.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.11.1) Requirement already satisfied: ipython>=5.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (7.16.1) Requirement already satisfied: docker<5.0.0,>=4.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.4.4) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.3.7) Requirement already satisfied: botocore<1.20.0,>=1.19.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (1.19.63) Requirement already satisfied: pycparser in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cffi>=1.12->cryptography->neural-compressor>=1.7->optimum[intel]) (2.20) Requirement already satisfied: websocket-client>=0.32.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from docker<5.0.0,>=4.4.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.58.0) Requirement already satisfied: gitdb<5,>=4.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.0.9) Requirement already satisfied: traitlets>=4.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.3.3) Requirement already satisfied: jedi>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.17.2) Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (3.0.19) Requirement already satisfied: backcall in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0) Requirement already satisfied: pygments in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (2.9.0) Requirement already satisfied: pexpect in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.8.0) Requirement already satisfied: decorator in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.9) Requirement already satisfied: pickleshare in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.5) Requirement already satisfied: MarkupSafe>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Jinja2>=3.0->Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: google-auth>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.30.2) Requirement already satisfied: requests-oauthlib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.3.0) Requirement already satisfied: importlib-resources in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Pint<0.17.0,>=0.16.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.4.0) Requirement already satisfied: python-engineio>=4.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (4.3.0) Requirement already satisfied: bidict>=0.21.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (0.21.4) Requirement already satisfied: pyasn1>=0.1.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from rsa<5.0.0,>=4.7->sigopt->neural-compressor>=1.7->optimum[intel]) (0.4.8) Requirement already satisfied: smmap<6,>=3.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gitdb<5,>=4.0.1->GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.0) Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.8) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (4.2.2) Requirement already satisfied: parso<0.8.0,>=0.7.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from jedi>=0.10->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.1) Requirement already satisfied: ipython-genutils in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from traitlets>=4.2->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0) Requirement already satisfied: ptyprocess>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pexpect->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.0) Requirement already satisfied: oauthlib>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests-oauthlib->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.1) ``` --- **Cell:** ```python from datasets import load_dataset, load_metric ``` OR ```python import datasets ``` **Traceback:** ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-7-34fb7ba3338d> in <module> ----> 1 from datasets import load_dataset, load_metric ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/__init__.py in <module> 32 ) 33 ---> 34 from .arrow_dataset import Dataset, concatenate_datasets 35 from .arrow_reader import ArrowReader, ReadInstruction 36 from .arrow_writer import ArrowWriter ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_dataset.py in <module> 59 from . import config, utils 60 from .arrow_reader import ArrowReader ---> 61 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 62 from .features import ClassLabel, Features, FeatureType, Sequence, Value, _ArrayXD, pandas_types_mapper 63 from .filesystems import extract_path_from_uri, is_remote_filesystem ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_writer.py in <module> 26 27 from . import config, utils ---> 28 from .features import ( 29 Features, 30 ImageExtensionType, ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/__init__.py in <module> 1 # flake8: noqa ----> 2 from .audio import Audio 3 from .features import * 4 from .features import ( 5 _ArrayXD, ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/audio.py in <module> 5 import pyarrow as pa 6 ----> 7 from ..utils.streaming_download_manager import xopen 8 9 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/utils/streaming_download_manager.py in <module> 16 17 from .. 
import config ---> 18 from ..filesystems import COMPRESSION_FILESYSTEMS 19 from .download_manager import DownloadConfig, map_nested 20 from .file_utils import ( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/__init__.py in <module> 11 12 if _has_s3fs: ---> 13 from .s3filesystem import S3FileSystem # noqa: F401 14 15 COMPRESSION_FILESYSTEMS: List[compression.BaseCompressedFileFileSystem] = [ ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py in <module> ----> 1 import s3fs 2 3 4 class S3FileSystem(s3fs.S3FileSystem): 5 """ ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/__init__.py in <module> ----> 1 from .core import S3FileSystem, S3File 2 from .mapping import S3Map 3 4 from ._version import get_versions 5 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/core.py in <module> 12 from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper 13 ---> 14 import aiobotocore 15 import botocore 16 import aiobotocore.session ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/__init__.py in <module> ----> 1 from .session import get_session, AioSession 2 3 __all__ = ['get_session', 'AioSession'] 4 __version__ = '1.3.0' ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/session.py in <module> 4 from botocore import retryhandler, translate 5 from botocore.exceptions import PartialCredentialsError ----> 6 from .client import AioClientCreator, AioBaseClient 7 from .hooks import AioHierarchicalEmitter 8 from .parsers import AioResponseParserFactory ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/client.py in <module> 11 from .args import AioClientArgsCreator 12 from .utils import AioS3RegionRedirector ---> 13 from . import waiter 14 15 history_recorder = get_global_history_recorder() ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/waiter.py in <module> 4 from botocore.exceptions import ClientError 5 from botocore.waiter import WaiterModel # noqa: F401, lgtm[py/unused-import] ----> 6 from botocore.waiter import Waiter, xform_name, logger, WaiterError, \ 7 NormalizedOperationMethod as _NormalizedOperationMethod, is_valid_waiter_error 8 from botocore.docs.docstring import WaiterDocstring ImportError: cannot import name 'is_valid_waiter_error' ``` Please let me know if there's anything else I can add to post. [1]: https://github.com/huggingface/notebooks/blob/master/examples/text_classification_quantization_inc.ipynb
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3554/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
34 days, 23:03:53
https://api.github.com/repos/huggingface/datasets/issues/3553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3553/comments
https://api.github.com/repos/huggingface/datasets/issues/3553/events
https://github.com/huggingface/datasets/issues/3553
1,097,252,275
I_kwDODunzps5BZr2z
3,553
set_format("np") no longer works for Image data
{ "avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4", "events_url": "https://api.github.com/users/cgarciae/events{/privacy}", "followers_url": "https://api.github.com/users/cgarciae/followers", "following_url": "https://api.github.com/users/cgarciae/following{/other_user}", "gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cgarciae", "id": 5862228, "login": "cgarciae", "node_id": "MDQ6VXNlcjU4NjIyMjg=", "organizations_url": "https://api.github.com/users/cgarciae/orgs", "received_events_url": "https://api.github.com/users/cgarciae/received_events", "repos_url": "https://api.github.com/users/cgarciae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions", "type": "User", "url": "https://api.github.com/users/cgarciae", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "A quick fix for now is doing this:\r\n\r\n```python\r\nX_train = np.stack(dataset[\"train\"][\"image\"])[..., None]", "This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three line failure:\r\n\r\n```python\r\ndataset = datasets.load_dataset(\"mnist\")\r\ndataset.set_format(\"jax\")\r\nX_train = dataset[\"train\"][\"image\"]\r\n```", "Hi! We've recently introduced a new Image feature that yields PIL Images (and caches transforms on them) instead of arrays.\r\n\r\nHowever, this feature requires a custom transform to yield np arrays directly:\r\n```python\r\nddict = datasets.load_dataset(\"mnist\")\r\n\r\ndef pil_image_to_array(batch):\r\n return {\"image\": [np.array(img) for img in batch[\"image\"]]} # or jnp.array(img) for Jax\r\n\r\nddict.set_transform(pil_image_to_array, columns=\"image\", output_all_columns=True)\r\n```\r\n\r\n[Docs](https://huggingface.co/docs/datasets/master/process.html#format-transform) on `set_transform`.\r\n\r\nAlso, the approach proposed by @cgarciae is not the best because it loads the entire column in memory.\r\n\r\n@albertvillanova @lhoestq WDYT? The Audio and the Image feature currently don't support the TF/Jax/PT Formatters, but for the Numpy Formatter maybe it makes more sense to return np arrays (and not a dict in the case of the Audio feature or a PIL Image object in the case of the Image feature).", "Yes I agree it should return arrays and not a PIL image (and possible an array instead of a dict for audio data).\r\nI'm currently finishing some code refactoring of the image and audio and opening a PR today. Maybe we can look into that after the refactoring", "This has been fixed in https://github.com/huggingface/datasets/pull/5072, which is included in the latest release of `datasets`." ]
2022-01-09T17:18:13
2022-10-14T12:03:55
2022-10-14T12:03:54
NONE
null
null
null
null
## Describe the bug `dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now it doesn't work, `set_format("np")` seems to have no effect and the dataset just returns a list/array of PIL images instead of numpy arrays as requested.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3553/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3553/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
277 days, 18:45:41
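A minimal runnable sketch of the `set_transform` workaround quoted in the comments of issue 3553 above, assuming a `datasets` version (1.17+) whose MNIST `image` column uses the `Image` feature and therefore yields PIL images:

```python
import numpy as np
from datasets import load_dataset

ddict = load_dataset("mnist")

def pil_to_numpy(batch):
    # Convert the PIL images produced by the Image feature into numpy arrays
    return {"image": [np.array(img) for img in batch["image"]]}

# Apply the conversion on the fly; non-image columns are passed through unchanged
ddict.set_transform(pil_to_numpy, columns=["image"], output_all_columns=True)

sample = ddict["train"][0]["image"]            # numpy array of shape (28, 28)
batch = np.stack(ddict["train"][:8]["image"])  # numpy array of shape (8, 28, 28)
```

Unlike `np.stack(dataset["train"]["image"])`, this converts images on access instead of materializing the whole column in memory at once.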
https://api.github.com/repos/huggingface/datasets/issues/3550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3550/comments
https://api.github.com/repos/huggingface/datasets/issues/3550/events
https://github.com/huggingface/datasets/issues/3550
1,096,522,377
I_kwDODunzps5BW5qJ
3,550
Bug in `openbookqa` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucadiliello", "id": 23355969, "login": "lucadiliello", "node_id": "MDQ6VXNlcjIzMzU1OTY5", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "repos_url": "https://api.github.com/users/lucadiliello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "type": "User", "url": "https://api.github.com/users/lucadiliello", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
[ "Closed by:\r\n- #4259" ]
2022-01-07T17:32:57
2022-05-04T06:33:00
2022-05-04T06:32:19
CONTRIBUTOR
null
null
null
null
## Describe the bug Dataset entries contain a typo. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> obqa = load_dataset('openbookqa', 'main') >>> obqa['train'][0] ``` ## Expected results ```python {'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['A', 'B', 'C', 'D']}, 'answerKey': 'D'} ``` ## Actual results ```python {'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting']}, 'answerKey': 'D'} ``` The bug is present in all configs and all splits. ## Environment info - `datasets` version: 1.17.0 - Platform: Linux-5.4.0-1057-aws-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3550/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3550/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
116 days, 12:59:22
https://api.github.com/repos/huggingface/datasets/issues/3548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3548/comments
https://api.github.com/repos/huggingface/datasets/issues/3548/events
https://github.com/huggingface/datasets/issues/3548
1,096,409,512
I_kwDODunzps5BWeGo
3,548
Specify the feature types of a dataset on the Hub without needing a dataset script
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abidlabs", "id": 1778297, "login": "abidlabs", "node_id": "MDQ6VXNlcjE3NzgyOTc=", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "repos_url": "https://api.github.com/users/abidlabs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "type": "User", "url": "https://api.github.com/users/abidlabs", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abidlabs", "id": 1778297, "login": "abidlabs", "node_id": "MDQ6VXNlcjE3NzgyOTc=", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "repos_url": "https://api.github.com/users/abidlabs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "type": "User", "url": "https://api.github.com/users/abidlabs", "user_view_type": "public" } ]
[ "After looking into this, discovered that this is already supported if the `dataset_infos.json` file is configured correctly! Here is a working example: https://huggingface.co/datasets/abidlabs/test-audio-13\r\n\r\nThis should be probably be documented, though. " ]
2022-01-07T15:17:06
2022-01-20T14:48:38
2022-01-20T14:48:38
MEMBER
null
null
null
null
**Is your feature request related to a problem? Please describe.** Currently if I upload a CSV with paths to audio files, the column type is string instead of Audio. **Describe the solution you'd like** I'd like to be able to specify the types of the column, so that when loading the dataset I directly get the feature types I want. The feature types could be read from the `dataset_infos.json` for example. **Describe alternatives you've considered** Create a dataset script to specify the features, but that seems complicated for a simple thing. cc @abidlabs
{ "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abidlabs", "id": 1778297, "login": "abidlabs", "node_id": "MDQ6VXNlcjE3NzgyOTc=", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "repos_url": "https://api.github.com/users/abidlabs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "type": "User", "url": "https://api.github.com/users/abidlabs", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/3548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3548/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
12 days, 23:31:32
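Until feature types can be declared alongside the data on the Hub, one possible workaround (a sketch, not the `dataset_infos.json` approach mentioned in the comment above) is to cast the string column after loading, using the public `cast_column` API and the `Audio` feature; `data.csv` and the `audio` column name below are placeholders:

```python
from datasets import load_dataset, Audio

# "data.csv" is assumed to contain an "audio" column with paths to audio files
ds = load_dataset("csv", data_files="data.csv", split="train")

# Cast the string column to the Audio feature type
# (decoding on access requires an audio backend such as soundfile/librosa)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

print(ds.features["audio"])   # Audio(sampling_rate=16000, ...)
print(ds[0]["audio"].keys())  # "path", "array", "sampling_rate"
```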
https://api.github.com/repos/huggingface/datasets/issues/3547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3547/comments
https://api.github.com/repos/huggingface/datasets/issues/3547/events
https://github.com/huggingface/datasets/issues/3547
1,096,405,515
I_kwDODunzps5BWdIL
3,547
Datasets created with `push_to_hub` can't be accessed in offline mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it", "Hi, I'm having the same issue. Is there any update on this?", "We haven't had a chance to fix this yet. If someone would like to give it a try I'd be happy to give some guidance", "@lhoestq Do you have an idea of what changes need to be made to `CachedDatasetModuleFactory`? I would be willing to take a crack at it. Currently unable to train with datasets I have `push_to_hub` on a cluster whose compute nodes are not connected to the internet.\r\n\r\nIt looks like it might be this line:\r\n\r\nhttps://github.com/huggingface/datasets/blob/0c1d099f87a883e52c42d3fd1f1052ad3967e647/src/datasets/load.py#L994\r\n\r\nWhich wouldn't pick up the stuff saved under `\"datasets/allenai___parquet/*\"`. Additionally, the datasets saved under `\"datasets/allenai___parquet/*\"` appear to have hashes in their name, e.g. `\"datasets/allenai___parquet/my_dataset-def9ee5552a1043e\"`. This would not be detected by `CachedDatasetModuleFactory`, which currently looks for subdirectories\r\n\r\nhttps://github.com/huggingface/datasets/blob/0c1d099f87a883e52c42d3fd1f1052ad3967e647/src/datasets/load.py#L995-L999", "`importable_directory_path` is used to find a **dataset script** that was previously downloaded and cached from the Hub\r\n\r\nHowever in your case there's no dataset script on the Hub, only parquet files. So the logic must be extended for this case.\r\n\r\nIn particular I think you can add a new logic in the case where `hashes is None` (i.e. if there's no dataset script associated to the dataset in the cache).\r\n\r\nIn this case you can check directly in the in the datasets cache for a directory named `<namespace>__parquet` and a subdirectory named `<config_id>`. The config_id must match `{self.name.replace(\"/\", \"--\")}-*`. \r\n\r\nIn your case those two directories correspond to `allenai___parquet` and then `allenai--my_dataset-def9ee5552a1043e`\r\n\r\nThen you can find the most recent version of the dataset in subdirectories (e.g. sorting using the last modified time of the `dataset_info.json` file).\r\n\r\nFinally, we will need return the module that is used to load the dataset from the cache. It is the same module than the one that would have been normally used if you had an internet connection.\r\n\r\nAt that point you can ping me, because we will need to pass all this:\r\n- `module_path = _PACKAGED_DATASETS_MODULES[\"parquet\"][0]`\r\n- `hash` it corresponds the name of the directory that contains the .arrow file, inside `<namespace>__parquet/<config_id>`\r\n- ` builder_kwargs = {\"hash\": hash, \"repo_id\": self.name, \"config_id\": config_id}`\r\nand currently `config_id` is not a valid argument for a `DatasetBuilder`\r\n\r\nI think in the future we want to change this caching logic completely, since I don't find it super easy to play with.", "Hi! Is there a workaround for the time being?\r\nLike passing `data_dir` or something like that?\r\n\r\nI would like to use [this diffuser example](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation) on my cluster whose nodes are not connected to the internet. I have downloaded the dataset online form the login node.", "Hi ! 
Yes you can save your dataset locally with `my_dataset.save_to_disk(\"path/to/local\")` and reload it later with `load_from_disk(\"path/to/local\")`\r\n\r\n(removing myself from assignees since I'm currently not working on this right now)", "Still not fixed? ......", "Any idea @lhoestq who to tag to fix this ? This is a very annoying bug, which is becoming more and more present since the push_to_hub API is getting used more ?", "Perhaps @mariosasko ? Thanks a lot for the great work on the lib !", "It should be easier to implement now that we improved the caching of datasets from `push_to_hub`: each dataset has its own directory in the cache.\r\n\r\nThe cache structure has been improved in https://github.com/huggingface/datasets/pull/5331. Now the cache structure is `\"{namespace__}<dataset_name>/<config_name>/<version>/<hash>/\"` which contains the arrow files `\"<dataset_name>-<split>.arrow\"` and `\"dataset_info.json\"`. \r\n\r\nThe idea is to extend `CachedDatasetModuleFactory` to also check if this directory exists in the cache (in addition to the already existing cache check) and return the requested dataset module. The module name can be found in the JSON file in the `builder_name` field.", "Any progress?", "I started a PR to draft the logic to reload datasets from the cache fi they were created with push_to_hub: https://github.com/huggingface/datasets/pull/6459\r\n\r\nFeel free to try it out", "It seems that this does not support dataset with uppercase name ", "Which version of `datasets` are you using ? This issue has been fixed with `datasets` 2.16", "I can confirm that this problem is still happening with `datasets` 2.17.0, installed from pip", "Can you share a code or a dataset that reproduces the issue ? It seems to work fine on my side.", "Yeah, \r\n```python\r\ndataset = load_dataset(\"roneneldan/TinyStories\")\r\n```\r\nI tried it with:\r\n```python\r\ndataset = load_dataset(\"roneneldan/tinystories\")\r\n```\r\nand it worked.\r\n\r\n> It seems that this does not support dataset with uppercase name\r\n\r\n@fecet was right, but if you just put the name lowercase, it works. " ]
2022-01-07T15:12:25
2024-02-15T17:41:24
2023-12-21T15:13:12
CONTRIBUTOR
null
null
null
null
## Describe the bug In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`. ## Steps to reproduce the bug in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` in bash: ``` export HF_DATASETS_OFFLINE=1 ``` in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` ## Expected results `datasets` should find the previously-cached dataset. ## Actual results ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'teven/matched_passages_wikidata': Offline mode is enabled ## Environment info - `datasets` version: 1.16.2.dev0 - Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 4, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/3547/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3547/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
713 days, 0:00:47
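A minimal sketch of the `save_to_disk` / `load_from_disk` workaround suggested in the comments of issue 3547 above, for reusing a Hub dataset on a machine without internet access:

```python
from datasets import load_dataset, load_from_disk

# On a machine with internet access: download once and serialize locally
ds = load_dataset("teven/matched_passages_wikidata")
ds.save_to_disk("matched_passages_wikidata")

# On the offline machine (e.g. after copying the folder, with HF_DATASETS_OFFLINE=1)
ds = load_from_disk("matched_passages_wikidata")
```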
https://api.github.com/repos/huggingface/datasets/issues/3544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3544/comments
https://api.github.com/repos/huggingface/datasets/issues/3544/events
https://github.com/huggingface/datasets/issues/3544
1,095,784,681
I_kwDODunzps5BUFjp
3,544
Ability to split a dataset in multiple files.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[]
2022-01-06T23:02:25
2022-01-06T23:02:25
null
CONTRIBUTOR
null
null
null
null
Hello, **Is your feature request related to a problem? Please describe.** My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset. I understand that we shouldn't overwrite an arrow file as this could cause Segfault and so on. Before 1.16, I was able to overwrite the dataset and that would work most of the time with some retries. **Describe the solution you'd like** I was thinking that if we could append `Dataset._data_files`, when the workers reload the Dataset, they would get the new columns. **Describe alternatives you've considered** I currently need to 1. Save multiple "versions" of the dataset and load the latest. 2. Try working with cache files to get the latest columns. **Additional context** I think this would be a great addition to HFDataset as Parquet supports multi-files input out of the box! I can make a PR myself with some pointers as needed :)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3544/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3543/comments
https://api.github.com/repos/huggingface/datasets/issues/3543/events
https://github.com/huggingface/datasets/issues/3543
1,095,226,438
I_kwDODunzps5BR9RG
3,543
Allow loading community metrics from the hub, just like datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eladsegal", "id": 13485709, "login": "eladsegal", "node_id": "MDQ6VXNlcjEzNDg1NzA5", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "repos_url": "https://api.github.com/users/eladsegal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "type": "User", "url": "https://api.github.com/users/eladsegal", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
[]
[ "Hi ! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/docs/hub/how-to-downstream.md#cached_download))", "This is a great solution in the meantime, thanks!", "Here's the code I used, in case it can be of help to someone else:\r\n```python\r\nimport os, shutil\r\nfrom huggingface_hub import hf_hub_download\r\ndef download_metric(repo_id, file_path):\r\n # repo_id: for models \"username/model_name\", for datasets \"datasets/username/model_name\"\r\n local_metric_path = hf_hub_download(repo_id=repo_id, filename=file_path)\r\n updated_local_metric_path = (os.path.dirname(local_metric_path) + os.path.basename(local_metric_path).replace(\".\", \"_\") + \".py\")\r\n shutil.copy(local_metric_path, updated_local_metric_path)\r\n return updated_local_metric_path\r\n\r\nmetric = load_metric(download_metric(REPO_ID, FILE_PATH))\r\n```", "Solved with https://github.com/huggingface/evaluate 🤗 ", "Yay!! cc @lvwerra @sashavor @douwekiela \r\n\r\nPlease share your feedback @eladsegal =)" ]
2022-01-06T11:26:26
2022-05-31T20:59:14
2022-05-31T20:53:37
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`. However, there is no option to do it with the metric uploaded to the hub. This means that if I want to allow other users to use it, they must download it first which makes the usage less smooth. **Describe the solution you'd like** Load metrics from the hub just like datasets are loaded. In order to not break stuff, the convention can be to put the metric file in a "metrics" folder in the hub.
{ "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eladsegal", "id": 13485709, "login": "eladsegal", "node_id": "MDQ6VXNlcjEzNDg1NzA5", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "repos_url": "https://api.github.com/users/eladsegal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "type": "User", "url": "https://api.github.com/users/eladsegal", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3543/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3543/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
145 days, 9:27:11
https://api.github.com/repos/huggingface/datasets/issues/3541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3541/comments
https://api.github.com/repos/huggingface/datasets/issues/3541/events
https://github.com/huggingface/datasets/issues/3541
1,095,033,828
I_kwDODunzps5BROPk
3,541
Support 7-zip compressed data files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "This should also resolve: https://github.com/huggingface/datasets/issues/3185." ]
2022-01-06T07:11:03
2022-07-19T10:18:30
null
MEMBER
null
null
null
null
**Is your feature request related to a problem? Please describe.** We should support 7-zip compressed data files: - [x] in `extract`: - #4672 - [ ] in `iter_archive`: both in streaming and non-streaming modes.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3541/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3541/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3540/comments
https://api.github.com/repos/huggingface/datasets/issues/3540/events
https://github.com/huggingface/datasets/issues/3540
1,094,900,336
I_kwDODunzps5BQtpw
3,540
How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/35062414?v=4", "events_url": "https://api.github.com/users/CindyTing/events{/privacy}", "followers_url": "https://api.github.com/users/CindyTing/followers", "following_url": "https://api.github.com/users/CindyTing/following{/other_user}", "gists_url": "https://api.github.com/users/CindyTing/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CindyTing", "id": 35062414, "login": "CindyTing", "node_id": "MDQ6VXNlcjM1MDYyNDE0", "organizations_url": "https://api.github.com/users/CindyTing/orgs", "received_events_url": "https://api.github.com/users/CindyTing/received_events", "repos_url": "https://api.github.com/users/CindyTing/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CindyTing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CindyTing/subscriptions", "type": "User", "url": "https://api.github.com/users/CindyTing", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[]
2022-01-06T02:13:42
2022-01-06T02:17:39
null
NONE
null
null
null
null
Hi, I use torch.utils.data.Dataset to define my own data, but I need to use the 'map' function of datasets.arrow_dataset.Dataset later, so I hope to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset. Here is an example. ``` from torch.utils.data import Dataset from datasets.arrow_dataset import Dataset as HFDataset class ADataset(Dataset): def __init__(self, data): super().__init__() self.data = data def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class MDataset(): def __init__(self, tokenizer: AutoTokenizer, data_args, training_args): self.train_dataset = ADataset(data_args) self.tokenizer = tokenizer self.data_args = data_args self.train_dataset = self.train_dataset.map( self.process_function, batched=True, remove_columns=column_names, load_from_cache_file=True, desc="Running tokenizer on train dataset", ) def process_function(self, examples): sentences = [" ".join(sample[0][3]) for sample in examples] tokenized = self.tokenizer( sentences, max_length=self.max_seq_len, padding=self.padding, truncation=True) ``` But it would raise an ERROR, AttributeError: 'ADataset' object has no attribute 'map'. so how to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset? Thanks in advance!
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3540/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3540/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3533/comments
https://api.github.com/repos/huggingface/datasets/issues/3533/events
https://github.com/huggingface/datasets/issues/3533
1,094,156,147
I_kwDODunzps5BN39z
3,533
Task search function on hub not working correctly
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)) , will be solved soon", "hmm actually i have no recollection of why I said that", "Because it has dots in some YAML keys, it can't be parsed and indexed by the back-end" ]
2022-01-05T09:36:30
2022-05-12T14:45:57
null
CONTRIBUTOR
null
null
null
null
When I want to look at all datasets of the category: `speech-processing` *i.e.* https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads , then the following dataset doesn't show up for some reason: - https://huggingface.co/datasets/speech_commands even though its task tags seem correct: https://raw.githubusercontent.com/huggingface/datasets/master/datasets/speech_commands/README.md
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3533/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3533/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3531/comments
https://api.github.com/repos/huggingface/datasets/issues/3531/events
https://github.com/huggingface/datasets/issues/3531
1,094,033,280
I_kwDODunzps5BNZ-A
3,531
Give clearer instructions to add the YAML tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[]
2022-01-05T06:44:20
2022-01-17T15:54:36
2022-01-17T15:54:36
MEMBER
null
null
null
null
## Describe the bug As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g.: https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32 Maybe we should give clearer instruction/hints in the README template.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3531/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
12 days, 9:10:16
https://api.github.com/repos/huggingface/datasets/issues/3522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3522/comments
https://api.github.com/repos/huggingface/datasets/issues/3522/events
https://github.com/huggingface/datasets/issues/3522
1,093,807,586
I_kwDODunzps5BMi3i
3,522
wmt19 is broken (zh-en)
{ "avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4", "events_url": "https://api.github.com/users/AjayP13/events{/privacy}", "followers_url": "https://api.github.com/users/AjayP13/followers", "following_url": "https://api.github.com/users/AjayP13/following{/other_user}", "gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AjayP13", "id": 5404177, "login": "AjayP13", "node_id": "MDQ6VXNlcjU0MDQxNzc=", "organizations_url": "https://api.github.com/users/AjayP13/orgs", "received_events_url": "https://api.github.com/users/AjayP13/received_events", "repos_url": "https://api.github.com/users/AjayP13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions", "type": "User", "url": "https://api.github.com/users/AjayP13", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
[ "This issue is not reproducible." ]
2022-01-04T22:33:45
2022-05-06T16:27:37
2022-05-06T16:27:37
NONE
null
null
null
null
## Describe the bug Loading the `wmt19` dataset with the 'zh-en' config fails because one of the source files cannot be reached. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wmt19", 'zh-en') ``` ## Expected results The dataset should download. ## Actual results `ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip` ## Environment info - `datasets` version: 1.15.1 - Platform: Linux - Python version: 3.8
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3522/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3522/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
121 days, 17:53:52
https://api.github.com/repos/huggingface/datasets/issues/3518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3518/comments
https://api.github.com/repos/huggingface/datasets/issues/3518/events
https://github.com/huggingface/datasets/issues/3518
1,093,063,455
I_kwDODunzps5BJtMf
3,518
Add PubMed Central Open Access dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "In the framework of BigScience:\r\n- bigscience-workshop/data_tooling#121\r\n\r\nI have created this dataset as a community dataset: https://huggingface.co/datasets/albertvillanova/pmc_open_access\r\n\r\nHowever, I was wondering that it may be more appropriate to move it under an org namespace: `pubmed_central` or `pmc`\r\nThis way, we could add other datasets I'm also working on: Author Manuscript Dataset, Historical OCR Dataset, LitArch Open Access Subset.\r\n\r\nWhat do you think? @lhoestq @mariosasko ", "Why not ! Having them under such namespaces would also help people searching for this kind of datasets.\r\nWe can also invite people from pubmed at one point", "DONE: https://huggingface.co/datasets/pmc/open_access" ]
2022-01-04T06:54:35
2022-01-17T15:25:57
2022-01-17T15:25:57
MEMBER
null
null
null
null
## Adding a Dataset - **Name:** PubMed Central Open Access - **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/ - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3518/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
13 days, 8:31:22
https://api.github.com/repos/huggingface/datasets/issues/3515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3515/comments
https://api.github.com/repos/huggingface/datasets/issues/3515/events
https://github.com/huggingface/datasets/issues/3515
1,092,624,695
I_kwDODunzps5BICE3
3,515
`ExpectedMoreDownloadedFiles` for `evidence_infer_treatment`
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Thanks for reporting @VictorSanh.\r\n\r\nI'm looking at it... " ]
2022-01-03T15:58:38
2022-02-14T13:21:43
2022-02-14T13:21:43
CONTRIBUTOR
null
null
null
null
## Describe the bug I am trying to load a dataset called `evidence_infer_treatment`. The first subset (`1.1`) works fine but the second returns an error (`2.0`). It downloads a file but crashes during the checksums. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("evidence_infer_treatment", "2.0") Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset use_auth_token=use_auth_token, File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 664, in _download_and_prepare self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 33, in verify_checksums raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'http://evidence-inference.ebm-nlp.com/v2.0.tar.gz'} ``` I did try to pass the argument `ignore_verifications=True` but run into an error when trying to build the dataset: ```python >>> load_dataset("evidence_infer_treatment", "2.0", ignore_verifications=True, download_mode="force_redownload") Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24... 
Downloading: 164MB [00:23, 6.98MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset use_auth_token=use_auth_token, File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 681, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 1080, in _prepare_split example = self.info.features.encode_example(record) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 1032, in encode_example return encode_nested_example(self, example) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in encode_nested_example k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in encode_nested_example list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]] File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in <listcomp> list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]] File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 828, in encode_nested_example for k, dict_tuples in utils.zip_dict(schema.feature, *obj): File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in zip_dict yield key, tuple(d[key] for d in dicts) File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in <genexpr> yield key, tuple(d[key] for d in dicts) KeyError: '' ``` ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid - Python version: 3.7.11 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3515/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
41 days, 21:23:05
https://api.github.com/repos/huggingface/datasets/issues/3512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3512/comments
https://api.github.com/repos/huggingface/datasets/issues/3512/events
https://github.com/huggingface/datasets/issues/3512
1,092,359,973
I_kwDODunzps5BHBcl
3,512
No Data format found
{ "avatar_url": "https://avatars.githubusercontent.com/u/57741378?v=4", "events_url": "https://api.github.com/users/shazzad47/events{/privacy}", "followers_url": "https://api.github.com/users/shazzad47/followers", "following_url": "https://api.github.com/users/shazzad47/following{/other_user}", "gists_url": "https://api.github.com/users/shazzad47/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shazzad47", "id": 57741378, "login": "shazzad47", "node_id": "MDQ6VXNlcjU3NzQxMzc4", "organizations_url": "https://api.github.com/users/shazzad47/orgs", "received_events_url": "https://api.github.com/users/shazzad47/received_events", "repos_url": "https://api.github.com/users/shazzad47/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shazzad47/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shazzad47/subscriptions", "type": "User", "url": "https://api.github.com/users/shazzad47", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
[ "Hi, which dataset is giving you an error?" ]
2022-01-03T09:41:11
2022-01-17T13:26:05
2022-01-17T13:26:05
NONE
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3512/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3512/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
14 days, 3:44:54
https://api.github.com/repos/huggingface/datasets/issues/3511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3511/comments
https://api.github.com/repos/huggingface/datasets/issues/3511/events
https://github.com/huggingface/datasets/issues/3511
1,092,170,411
I_kwDODunzps5BGTKr
3,511
Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/92849978?v=4", "events_url": "https://api.github.com/users/MIKURI0114/events{/privacy}", "followers_url": "https://api.github.com/users/MIKURI0114/followers", "following_url": "https://api.github.com/users/MIKURI0114/following{/other_user}", "gists_url": "https://api.github.com/users/MIKURI0114/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MIKURI0114", "id": 92849978, "login": "MIKURI0114", "node_id": "U_kgDOBYjHOg", "organizations_url": "https://api.github.com/users/MIKURI0114/orgs", "received_events_url": "https://api.github.com/users/MIKURI0114/received_events", "repos_url": "https://api.github.com/users/MIKURI0114/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MIKURI0114/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MIKURI0114/subscriptions", "type": "User", "url": "https://api.github.com/users/MIKURI0114", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
[ "Can you reopen with the correct dataset name (if relevant)?\r\n\r\nThanks", "The dataset viewer was down tonight. It works again." ]
2022-01-03T02:03:23
2022-01-03T08:41:26
2022-01-03T08:23:07
NONE
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3511/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3511/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
6:19:44
https://api.github.com/repos/huggingface/datasets/issues/3510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3510/comments
https://api.github.com/repos/huggingface/datasets/issues/3510/events
https://github.com/huggingface/datasets/issues/3510
1,091,997,004
I_kwDODunzps5BFo1M
3,510
`wiki_dpr` details for Open Domain Question Answering tasks
{ "avatar_url": "https://avatars.githubusercontent.com/u/40918514?v=4", "events_url": "https://api.github.com/users/pk1130/events{/privacy}", "followers_url": "https://api.github.com/users/pk1130/followers", "following_url": "https://api.github.com/users/pk1130/following{/other_user}", "gists_url": "https://api.github.com/users/pk1130/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pk1130", "id": 40918514, "login": "pk1130", "node_id": "MDQ6VXNlcjQwOTE4NTE0", "organizations_url": "https://api.github.com/users/pk1130/orgs", "received_events_url": "https://api.github.com/users/pk1130/received_events", "repos_url": "https://api.github.com/users/pk1130/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pk1130/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pk1130/subscriptions", "type": "User", "url": "https://api.github.com/users/pk1130", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Hi ! According to the DPR paper, the wikipedia dump is the one from Dec. 20, 2018.\r\nEach instance contains a paragraph of at most 100 word, as well as the title of the wikipedia page it comes from and the DPR embedding (a 768-d vector).", "Closed by:\r\n- #3534" ]
2022-01-02T11:04:01
2022-02-17T13:46:20
2022-02-17T13:46:20
NONE
null
null
null
null
Hey guys! Thanks for creating the `wiki_dpr` dataset! I am currently trying to use the dataset for context retrieval using DPR on NQ questions and need details about what each of the files and data instances mean, which version of the Wikipedia dump it uses, etc. Please respond at your earliest convenience regarding the same! Thanks a ton! P.S.: (If one of @thomwolf @lewtun @lhoestq could respond, that would be even better since they have the first-hand details of the dataset. If anyone else has those, please reach out! Thanks!)
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3510/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3510/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
46 days, 2:42:19
https://api.github.com/repos/huggingface/datasets/issues/3507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3507/comments
https://api.github.com/repos/huggingface/datasets/issues/3507/events
https://github.com/huggingface/datasets/issues/3507
1,091,214,808
I_kwDODunzps5BCp3Y
3,507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
[]
[ "IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so that for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nI don't really have an opinion regarding the JSON metadata as I don't know enough about it.\r\n\r\n", "I don't know all the details, but generally I'd be in favor of unifying the metadata formats into YAML inside .md (and so deprecating the dataset_infos.json) \r\n\r\n(Ultimately the CI can run on \"HuggingFace Actions\" instead of on GitHub)", "The dataset_infos.json file currently has these useful infos for each dataset configuration, that I think can be moved to the dataset tags:\r\n- Size of the dataset in MB: download size, arrow file size, and total size (sum of download + arrow)\r\n- Size of each split in MB and number of examples. Again this can be moved to the dataset tags\r\n- Feature type of each column\r\n- supported task templates (it defines what columns correspond to the features and labels for example)\r\n\r\nBut it also has this, which I'm not sure if it should be in the tags or not:\r\n- Checksums of the downloaded files for integrity verifications\r\n\r\nSo ultimately this file could probably be deprecated in favor of having the infos in the tags.\r\n\r\n> Also note that for generating both (dataset_infos.json file and dummy data), the entire dataset needs being downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).\r\n\r\nTo get the exact number of examples and size in MB of the dataset, one needs to download and generate it completely. IMO these infos are very important when someone considers using a dataset. Though using streaming we could do some extrapolation to have approximate values instead.\r\n\r\nFor the integrity verifications we also need the number of examples and the checksums of the downloaded files, so it requires the dataset to be fully downloaded once. This can be optional though.\r\n\r\n> IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work)\r\n\r\nI agree with this. Usually if a dataset works in streaming mode, then it works in non-streaming mode (the other way around is not true though).\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nYes indeed, or at least make sure that it was tested on the true data.", "(note that if we wanted to display sizes, etc we could also pretty easily parse the `dataset_infos.json` on the hub side)", "I agree that we can move the relevant parts of `dataset_infos.json` to the YAML tags.\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data. <\r\n> > Yes indeed, or at least make sure that it was tested on the true data.\r\n\r\nI like the idea of testing streaming and falling back to the dummy data test if streaming does not work. 
Generating dummy data can be very tedious, so this would be a nice incentive for the contributors to make their datasets streamable. ", "CC: @severo ", "About dummy data, please see e.g. this PR: https://github.com/huggingface/datasets/pull/3692/commits/62368daac0672041524a471386d5e78005cf357a\r\n- I updated the previous dummy data: I just had to rename the file and its directory\r\n - the dummy data zip contains only a single file: `pubmed22n0001.xml.gz`\r\n\r\nThen I discover it fails: https://app.circleci.com/pipelines/github/huggingface/datasets/9800/workflows/173a4433-8feb-4fc6-ab9e-59762084e3e1/jobs/60437\r\n```\r\nNo such file or directory: '.../dummy_data/pubmed22n0002.xml.gz'\r\n```\r\n- it needs dummy data for all the 1114 files: \r\n `_URLs = [f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1115)]`\r\n- this confirms me that it never passed the test: these dummy data files were not present before my PR\r\n- therefore, is it really useful the data test if we just ignore it when it does not pass?\r\n\r\nIn relation with JSON metadata, I'm generating the file for `pubmed` (see above) in a GCP instance: it's running for more than 3 hours and only 9 million examples generated so far (before my PR, it had 32 million, now it has more).", "I mention in https://github.com/huggingface/datasets-server/wiki/Preliminary-design that the future \"datasets server\" could be in charge of generating both the dummy data and the dataset-info.json file if required (or their equivalent).", "Hi ! I think dummy data generation is out of scope for the datasets server, since it's about generating the original data files.\r\n\r\nThat would be amazing to have it generate the dataset_infos.json though !", "From some offline discussion with @mariosasko and especially for vision datasets, we'll probably not require dummy data anymore and use streaming instead :) This will make adding a new dataset much easier.\r\nThis should also make sure that streaming works as expected directly in the CI, without having to check the dataset viewer once the PR is merged", "OK. I removed the \"dummy data\" item from the services of the dataset server", "It seems that migration from dataset-info.json to dataset card YAML has been acted.\r\n\r\nProbably it's a good idea, but I didn't find the pros and cons of this decision, so I put some I could think of:\r\n\r\npros:\r\n- only one file to parse, share, sync\r\n- it gives a hint to the users that if you write your dataset card, you should also specify the metadata\r\n\r\ncons:\r\n- the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n- YAML vs JSON: not sure which one is easier for users to fill and maintain\r\n- two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file.\r\n- [low priority] besides the JSON file, we might want to support yaml or toml file if the user prefers (as [prettier](https://prettier.io/docs/en/configuration.html) and others do for their config files, for example). Inside the md, I understand that only YAML is allowed", "> the metadata header might be very long, before reaching the start of the README/dataset card. 
It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n\r\nNote that we could simply not have the checksums in the YAML metadata at all, or maybe at one point have a pointer to another file instead.\r\n\r\nWe can also choose to hide (collapse) certain sections in the YAML by default when we open the dataset card editor.\r\n\r\n> two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file.\r\n\r\nI think it's fine for now. Later if we really end up with too many YAML sections we can see if we need to tweak the API endpoints or the `datasets`/`huggingface_hub` tools\r\n\r\n> YAML vs JSON: not sure which one is easier for users to fill and maintain\r\n\r\nRegarding YAML vs JSON: I think YAML is easier to write by hand, and I also think that it's better for consistency - i.e. we're using more and more YAML to configure models/datasets/spaces", "I didn't know the decision was already taken. Good to know. 😅", "> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n\r\nWe can definitely work on this on the hub side to make the UX better", "Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets (see [here](https://www.tensorflow.org/datasets/community_catalog/huggingface)).\r\n\r\nFYI I noticed today that they are using the exported dataset_infos.json files from github to get the metadata (see their code [here](https://github.com/tensorflow/datasets/blob/a482f01c036a10496f5e22e69a2ef81b707cc418/tensorflow_datasets/scripts/documentation/build_community_catalog.py#L261))", "Metadata is now stored as YAML, and dummy data is deprecated, so I think we can close this issue." ]
2021-12-30T17:04:25
2022-11-04T15:31:38
2022-11-04T15:31:37
MEMBER
null
null
null
null
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of also having the JSON metadata (dataset_infos.json file)? On the other hand, the dummy data is necessary for testing (in our CI suite) that the canonical dataset loads correctly. However: - the dataset preview feature is already an indirect test that the dataset loads correctly (it also tests it is streamable though) - we are migrating canonical datasets to the Hub Do we really need to continue testing them in our CI? Also note that for generating both (the dataset_infos.json file and the dummy data), the entire dataset needs to be downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data). Feel free to ping other people for the discussion. CC: @lhoestq @mariosasko @thomwolf @julien-c @patrickvonplaten @anton-l @LysandreJik @yjernite @nateraw
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3507/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3507/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
308 days, 22:27:12
https://api.github.com/repos/huggingface/datasets/issues/3505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3505/comments
https://api.github.com/repos/huggingface/datasets/issues/3505/events
https://github.com/huggingface/datasets/issues/3505
1,091,150,820
I_kwDODunzps5BCaPk
3,505
cast_column function not working with map function in streaming mode for Audio features
{ "avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4", "events_url": "https://api.github.com/users/ashu5644/events{/privacy}", "followers_url": "https://api.github.com/users/ashu5644/followers", "following_url": "https://api.github.com/users/ashu5644/following{/other_user}", "gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ashu5644", "id": 8268102, "login": "ashu5644", "node_id": "MDQ6VXNlcjgyNjgxMDI=", "organizations_url": "https://api.github.com/users/ashu5644/orgs", "received_events_url": "https://api.github.com/users/ashu5644/received_events", "repos_url": "https://api.github.com/users/ashu5644/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions", "type": "User", "url": "https://api.github.com/users/ashu5644", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)." ]
2021-12-30T14:52:01
2022-01-18T19:54:07
2022-01-18T19:54:07
NONE
null
null
null
null
## Describe the bug I am trying to use Audio class for loading audio features using custom dataset. I am able to cast 'audio' feature into 'Audio' format with cast_column function. On using map function, I am not getting 'Audio' casted feature but getting path of audio file only. I am getting features of 'audio' of string type with load_dataset call. After using cast_column 'audio' feature is converted into 'Audio' type. But in map function I am not able to get Audio type for audio feature & getting string type data containing path of file only. So I am not able to use processor in encode function. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset, Audio from transformers import Wav2Vec2Processor def encode(batch, processor): print("Audio: ",batch['audio']) batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values return batch def print_ds(ds): iterator = iter(ds) for d in iterator: print("Data: ",d) break processor = Wav2Vec2Processor.from_pretrained(pretrained_model_path) dataset = load_dataset("custom_dataset.py","train",data_files={'train':'train_path.txt'}, data_dir="data", streaming=True, split="train") print("Features: ",dataset.features) print_ds(dataset) dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) print("Features: ",dataset.features) print_ds(dataset) dataset = dataset.map(lambda x: encode(x,processor)) print("Features: ",dataset.features) print_ds(dataset) ``` ## Expected results map function not printing Audio type features be used with processor function and getting error in processor call due to this. ## Actual results # after load_dataset call Features: {'sentence': Value(dtype='string', id=None), 'audio': Value(dtype='string', id=None)} Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': 'data/0116_003.wav'} # after cast_column call Features: {'sentence': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None)} Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': {'path': 'data/0116_003.wav', 'array': array([ 1.2662281e-06, 1.0264218e-06, -1.3615092e-06, ..., 1.3017889e-02, 1.0085563e-02, 4.8155054e-03], dtype=float32), 'sampling_rate': 16000}} # after map call Features: None Audio: data/0116_003.wav Traceback (most recent call last): File "demo2.py", line 36, in <module> print_ds(dataset) File "demo2.py", line 11, in print_ds for d in iterator: File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 341, in __iter__ for key, example in self._iter(): File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 338, in _iter yield from ex_iterable File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 192, in __iter__ yield key, self.function(example) File "demo2.py", line 32, in <lambda> dataset = dataset.map(lambda x: batch_encode(x,processor)) File "demo2.py", line 6, in batch_encode batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values TypeError: string indices must be integers ## Environment info - `datasets` version: 1.17.0 - Platform: Linux-4.14.243 with-debian-bullseye-sid - Python version: 3.7.9 - PyArrow version: 6.0.1
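A possible stopgap until the fix described in the comment above lands is sketched below: it skips `cast_column` and decodes/resamples the audio manually inside the map function. This assumes local `.wav` paths as in the report and adds `librosa` as a dependency, neither of which is part of the original script:

```python
import librosa

def encode(batch, processor):
    # Without cast_column, batch["audio"] is still the raw file path string.
    array, sampling_rate = librosa.load(batch["audio"], sr=16_000)  # decode + resample
    batch["input_values"] = processor(array, sampling_rate=sampling_rate).input_values[0]
    return batch

dataset = dataset.map(lambda x: encode(x, processor))
```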
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3505/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3505/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
19 days, 5:02:06
https://api.github.com/repos/huggingface/datasets/issues/3504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3504/comments
https://api.github.com/repos/huggingface/datasets/issues/3504/events
https://github.com/huggingface/datasets/issues/3504
1,090,682,230
I_kwDODunzps5BAn12
3,504
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
{ "avatar_url": "https://avatars.githubusercontent.com/u/12600692?v=4", "events_url": "https://api.github.com/users/ToddMorrill/events{/privacy}", "followers_url": "https://api.github.com/users/ToddMorrill/followers", "following_url": "https://api.github.com/users/ToddMorrill/following{/other_user}", "gists_url": "https://api.github.com/users/ToddMorrill/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ToddMorrill", "id": 12600692, "login": "ToddMorrill", "node_id": "MDQ6VXNlcjEyNjAwNjky", "organizations_url": "https://api.github.com/users/ToddMorrill/orgs", "received_events_url": "https://api.github.com/users/ToddMorrill/received_events", "repos_url": "https://api.github.com/users/ToddMorrill/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ToddMorrill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ToddMorrill/subscriptions", "type": "User", "url": "https://api.github.com/users/ToddMorrill", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @ToddMorrill, thanks for reporting.\r\n\r\nThree weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu\r\n\r\nThey told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have their data back online asap.", "Hi @ToddMorrill, people from the Pile team have mirrored their data in a new host server: https://mystic.the-eye.eu\r\n\r\nSee:\r\n- #3627\r\n\r\nIt should work if you update your URL.\r\n\r\nWe should also update the URL in our course material.", "The old URL is still present in the HuggingFace course here: \r\nhttps://huggingface.co/course/chapter5/4?fw=pt\r\n\r\nI have created a PR for the Notebook here: https://github.com/huggingface/notebooks/pull/148\r\nNot sure if the HTML is in a public repo. I wasn't able to find it. ", "Fixed the other two URLs here: \r\nhttps://github.com/mwunderlich/notebooks/pull/1", "Both URLs are broken now\r\n`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`\r\nAnd\r\n`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`\r\n\r\n\r\n", "I was able to find a torrent with \"The Pile\" dataset here: [The Pile An 800GB Dataset of Diverse Text for Language Modeling ](https://academictorrents.com/details/0d366035664fdf51cfbe9f733953ba325776e667)\r\n\r\nThe complete dataset is huge, so I would suggest you to download only the \"PUBMED_title_abstracts_2019_baseline.jsonl.zst\" file, which is about 7GB. 
You can do this by using a torrent client of your choice (I typically utilize Transmission, which is pre-installed in Ubuntu distributions).\r\n\r\n", "@albertvillanova another issue:\r\n```\r\n15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights()\r\n16 File \"/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py\", line 474, in experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights\r\n17 column_names = next(iter(dataset)).keys()\r\n18 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1353, in __iter__\r\n19 for key, example in ex_iterable:\r\n20 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 207, in __iter__\r\n21 yield from self.generate_examples_fn(**self.kwargs)\r\n22 File \"/lfs/ampere1/0/brando9/.cache/huggingface/modules/datasets_modules/datasets/EleutherAI--pile/ebea56d358e91cf4d37b0fde361d563bed1472fbd8221a21b38fc8bb4ba554fb/pile.py\", line 236, in _generate_examples\r\n23 with zstd.open(open(files[subset], \"rb\"), \"rt\", encoding=\"utf-8\") as f:\r\n24 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/streaming.py\", line 74, in wrapper\r\n25 return function(*args, download_config=download_config, **kwargs)\r\n26 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py\", line 496, in xopen\r\n27 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n28 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py\", line 134, in open\r\n29 return self.__enter__()\r\n30 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py\", line 102, in __enter__\r\n31 f = self.fs.open(self.path, mode=mode)\r\n32 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/spec.py\", line 1241, in open\r\n33 f = self._open(\r\n34 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 356, in _open\r\n35 size = size or self.info(path, **kwargs)[\"size\"]\r\n36 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 121, in wrapper\r\n37 return sync(self.loop, func, *args, **kwargs)\r\n38 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 106, in sync\r\n39 raise return_result\r\n40 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 61, in _runner\r\n41 result[0] = await coro\r\n42 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 430, in _info\r\n43 raise FileNotFoundError(url) from exc\r\n44 FileNotFoundError: https://the-eye.eu/public/AI/pile_preliminary_components/NIH_ExPORTER_awarded_grant_text.jsonl.zst\r\n```\r\n\r\nany suggestions?", "related: https://github.com/huggingface/datasets/issues/6144", "this seems to work but it's rather annoying.\r\n\r\nSummary of how to make it work:\r\n1. get urls to parquet files into a list\r\n2. load list to load_dataset via `load_dataset('parquet', data_files=urls)` (note api names to hf are really confusing sometimes)\r\n3. 
then it should work, print a batch of text.\r\n\r\npresudo code\r\n```python\r\nurls_hacker_news = [\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00000-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00001-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00002-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00003-of-00004.parquet\"\r\n]\r\n\r\n...\r\n\r\n\r\n # streaming = False\r\n from diversity.pile_subset_urls import urls_hacker_news\r\n path, name, data_files = 'parquet', 'hacker_news', urls_hacker_news\r\n # not changing\r\n batch_size = 512\r\n today = datetime.datetime.now().strftime('%Y-m%m-d%d-t%Hh_%Mm_%Ss')\r\n run_name = f'{path} div_coeff_{num_batches=} ({today=} ({name=}) {data_mixture_name=} {probabilities=})'\r\n print(f'{run_name=}')\r\n\r\n # - Init wandb\r\n debug: bool = mode == 'dryrun'\r\n run = wandb.init(mode=mode, project=\"beyond-scale\", name=run_name, save_code=True)\r\n wandb.config.update({\"num_batches\": num_batches, \"path\": path, \"name\": name, \"today\": today, 'probabilities': probabilities, 'batch_size': batch_size, 'debug': debug, 'data_mixture_name': data_mixture_name, 'streaming': streaming, 'data_files': data_files})\r\n # run.notify_on_failure() # https://community.wandb.ai/t/how-do-i-set-the-wandb-alert-programatically-for-my-current-run/4891\r\n print(f'{debug=}')\r\n print(f'{wandb.config=}')\r\n\r\n # -- Get probe network\r\n from datasets import load_dataset\r\n import torch\r\n from transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\n tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n if tokenizer.pad_token_id is None:\r\n tokenizer.pad_token = tokenizer.eos_token\r\n probe_network = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n device = torch.device(f\"cuda:{0}\" if torch.cuda.is_available() else \"cpu\")\r\n probe_network = probe_network.to(device)\r\n\r\n # -- Get data set\r\n def my_load_dataset(path, name):\r\n print(f'{path=} {name=} {streaming=}')\r\n if path == 'json' or path == 'bin' or path == 'csv':\r\n print(f'{data_files_prefix+name=}')\r\n return load_dataset(path, data_files=data_files_prefix+name, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n elif path == 'parquet':\r\n print(f'{data_files=}')\r\n return load_dataset(path, data_files=data_files, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n else:\r\n return load_dataset(path, name, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n # - get data set for real now\r\n if isinstance(path, str):\r\n dataset = my_load_dataset(path, name)\r\n else:\r\n print('-- interleaving datasets')\r\n datasets = [my_load_dataset(path, name).with_format(\"torch\") for path, name in zip(path, name)]\r\n [print(f'{dataset.description=}') for dataset in datasets]\r\n dataset = interleave_datasets(datasets, probabilities)\r\n print(f'{dataset=}')\r\n batch = dataset.take(batch_size)\r\n print(f'{next(iter(batch))=}')\r\n column_names = next(iter(batch)).keys()\r\n print(f'{column_names=}')\r\n\r\n # - Prepare functions to tokenize batch\r\n def preprocess(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", max_length=128, truncation=True, return_tensors=\"pt\")\r\n remove_columns = column_names # remove all 
keys that are not tensors to avoid bugs in collate function in task2vec's pytorch data loader\r\n def map(batch):\r\n return batch.map(preprocess, batched=True, remove_columns=remove_columns)\r\n tokenized_batch = map(batch)\r\n print(f'{next(iter(tokenized_batch))=}')\r\n```\r\n\r\nhttps://stackoverflow.com/questions/76891189/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-th/76902681#76902681\r\n\r\nhttps://discuss.huggingface.co/t/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-the-files-are-not-available/50555/5?u=severo", "If some people stumble upon this thread and still have this problem, i reuploaded the dataset to HF [here](https://huggingface.co/datasets/casinca/PUBMED_title_abstracts_2019_baseline)\r\n\r\nIts the exact same dataset you just have to change the url from the course, for example:\r\n\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\n\r\ndata_files = \"https://huggingface.co/datasets/casinca/PUBMED_title_abstracts_2019_baseline/resolve/main/PUBMED_title_abstracts_2019_baseline.jsonl.zst\"\r\npubmed_dataset = load_dataset(\r\n \"json\",\r\n data_files=data_files,\r\n split=\"train\",\r\n download_config=DownloadConfig(delete_extracted=True), # optional argument\r\n)\r\n```" ]
2021-12-29T18:23:20
2024-05-20T09:44:59
2022-02-17T15:04:25
NONE
null
null
null
null
## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset # This takes a few minutes to run, so go grab a tea or coffee while you wait :) data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst" pubmed_dataset = load_dataset("json", data_files=data_files, split="train") pubmed_dataset ``` I also tried with `wget` as follows. ``` wget https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ``` ## Expected results I expect to be able to download this file. ## Actual results Traceback ``` --------------------------------------------------------------------------- timeout Traceback (most recent call last) /usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self) 158 try: --> 159 conn = connection.create_connection( 160 (self._dns_host, self.port), self.timeout, **extra_kw /usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 /usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock timeout: timed out During handling of the above exception, another exception occurred: ConnectTimeoutError Traceback (most recent call last) /usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 664 # Make the request on the httplib connection object. --> 665 httplib_response = self._make_request( 666 conn, /usr/lib/python3/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 375 try: --> 376 self._validate_conn(conn) 377 except (SocketTimeout, BaseSSLError) as e: /usr/lib/python3/dist-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 995 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 996 conn.connect() 997 /usr/lib/python3/dist-packages/urllib3/connection.py in connect(self) 313 # Add certificate verification --> 314 conn = self._new_conn() 315 hostname = self.host /usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self) 163 except SocketTimeout: --> 164 raise ConnectTimeoutError( 165 self, ConnectTimeoutError: (<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. 
(connect timeout=10.0)') During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) /usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 438 if not chunked: --> 439 resp = conn.urlopen( 440 method=request.method, /usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 718 --> 719 retries = retries.increment( 720 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] /usr/lib/python3/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 435 if new_retry.is_exhausted(): --> 436 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 437 MaxRetryError: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')) During handling of the above exception, another exception occurred: ConnectTimeout Traceback (most recent call last) /tmp/ipykernel_15104/606583593.py in <module> 3 # This takes a few minutes to run, so go grab a tea or coffee while you wait :) 4 data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst" ----> 5 pubmed_dataset = load_dataset("json", data_files=data_files, split="train") 6 pubmed_dataset ~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1655 1656 # Create a dataset builder -> 1657 builder_instance = load_dataset_builder( 1658 path=path, 1659 name=name, ~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs) 1492 download_config = download_config.copy() if download_config else DownloadConfig() 1493 download_config.use_auth_token = use_auth_token -> 1494 dataset_module = dataset_module_factory( 1495 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files 1496 ) ~/.local/lib/python3.8/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs) 1116 # Try packaged 1117 if path in _PACKAGED_DATASETS_MODULES: -> 1118 return PackagedDatasetModuleFactory( 1119 path, data_files=data_files, download_config=download_config, download_mode=download_mode 1120 ).get_module() ~/.local/lib/python3.8/site-packages/datasets/load.py in get_module(self) 773 else get_patterns_locally(str(Path().resolve())) 774 ) --> 775 data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token) 776 module_path, hash = _PACKAGED_DATASETS_MODULES[self.name] 777 builder_kwargs = {"hash": hash, "data_files": data_files} ~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, 
allowed_extensions, use_auth_token) 576 for key, patterns_for_key in patterns.items(): 577 out[key] = ( --> 578 DataFilesList.from_local_or_remote( 579 patterns_for_key, 580 base_path=base_path, ~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 545 base_path = base_path if base_path is not None else str(Path().resolve()) 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) --> 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token) 548 return cls(data_files, origin_metadata) 549 ~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_origin_metadata_locally_or_by_urls(data_files, max_workers, use_auth_token) 492 data_files: List[Union[Path, Url]], max_workers=64, use_auth_token: Optional[Union[bool, str]] = None 493 ) -> Tuple[str]: --> 494 return thread_map( 495 partial(_get_single_origin_metadata_locally_or_by_urls, use_auth_token=use_auth_token), 496 data_files, ~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in thread_map(fn, *iterables, **tqdm_kwargs) 92 """ 93 from concurrent.futures import ThreadPoolExecutor ---> 94 return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) 95 96 ~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in _executor_map(PoolExecutor, fn, *iterables, **tqdm_kwargs) 74 map_args.update(chunksize=chunksize) 75 with PoolExecutor(**pool_kwargs) as ex: ---> 76 return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs)) 77 78 ~/.local/lib/python3.8/site-packages/tqdm/notebook.py in __iter__(self) 252 def __iter__(self): 253 try: --> 254 for obj in super(tqdm_notebook, self).__iter__(): 255 # return super(tqdm...) 
will not catch exception 256 yield obj ~/.local/lib/python3.8/site-packages/tqdm/std.py in __iter__(self) 1171 # (note: keep this check outside the loop for performance) 1172 if self.disable: -> 1173 for obj in iterable: 1174 yield obj 1175 return /usr/lib/python3.8/concurrent/futures/_base.py in result_iterator() 617 # Careful not to keep a reference to the popped future 618 if timeout is None: --> 619 yield fs.pop().result() 620 else: 621 yield fs.pop().result(end_time - time.monotonic()) /usr/lib/python3.8/concurrent/futures/_base.py in result(self, timeout) 442 raise CancelledError() 443 elif self._state == FINISHED: --> 444 return self.__get_result() 445 else: 446 raise TimeoutError() /usr/lib/python3.8/concurrent/futures/_base.py in __get_result(self) 387 if self._exception: 388 try: --> 389 raise self._exception 390 finally: 391 # Break a reference cycle with the exception in self._exception /usr/lib/python3.8/concurrent/futures/thread.py in run(self) 55 56 try: ---> 57 result = self.fn(*self.args, **self.kwargs) 58 except BaseException as exc: 59 self.future.set_exception(exc) ~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_single_origin_metadata_locally_or_by_urls(data_file, use_auth_token) 483 if isinstance(data_file, Url): 484 data_file = str(data_file) --> 485 return (request_etag(data_file, use_auth_token=use_auth_token),) 486 else: 487 data_file = str(data_file.resolve()) ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in request_etag(url, use_auth_token) 489 def request_etag(url: str, use_auth_token: Optional[Union[str, bool]] = None) -> Optional[str]: 490 headers = get_authentication_headers_for_url(url, use_auth_token=use_auth_token) --> 491 response = http_head(url, headers=headers, max_retries=3) 492 response.raise_for_status() 493 etag = response.headers.get("ETag") if response.ok else None ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries) 474 headers = copy.deepcopy(headers) or {} 475 headers["user-agent"] = get_datasets_user_agent(user_agent=headers.get("user-agent")) --> 476 response = _request_with_retry( 477 method="HEAD", 478 url=url, ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: 408 if tries > max_retries: --> 409 raise err 410 else: 411 logger.info(f"{method} request to {url} timed out, retrying... [{tries/max_retries}]") ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 403 tries += 1 404 try: --> 405 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) 406 success = True 407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: /usr/lib/python3/dist-packages/requests/api.py in request(method, url, **kwargs) 58 # cases, and look like a memory leak in others. 
59 with sessions.Session() as session: ---> 60 return session.request(method=method, url=url, **kwargs) 61 62 /usr/lib/python3/dist-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 531 } 532 send_kwargs.update(settings) --> 533 resp = self.send(prep, **send_kwargs) 534 535 return resp /usr/lib/python3/dist-packages/requests/sessions.py in send(self, request, **kwargs) 644 645 # Send the request --> 646 r = adapter.send(request, **kwargs) 647 648 # Total elapsed time of the request (approximately) /usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 502 # TODO: Remove this in 3.0.0: see #2811 503 if not isinstance(e.reason, NewConnectionError): --> 504 raise ConnectTimeout(e, request=request) 505 506 if isinstance(e.reason, ResponseError): ConnectTimeout: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')) ``` ## Environment info - `datasets` version: 1.17.0 - Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3504/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3504/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
49 days, 20:41:05
https://api.github.com/repos/huggingface/datasets/issues/3503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3503/comments
https://api.github.com/repos/huggingface/datasets/issues/3503/events
https://github.com/huggingface/datasets/issues/3503
1,090,472,735
I_kwDODunzps5A_0sf
3,503
Batched in filter throws error
{ "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gpucce", "id": 32967787, "login": "gpucce", "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "organizations_url": "https://api.github.com/users/gpucce/orgs", "received_events_url": "https://api.github.com/users/gpucce/received_events", "repos_url": "https://api.github.com/users/gpucce/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "type": "User", "url": "https://api.github.com/users/gpucce", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21", "user_view_type": "public" } ]
[]
2021-12-29T12:01:04
2022-01-04T10:24:27
2022-01-04T10:24:27
CONTRIBUTOR
null
null
null
null
I hope this is really a bug; I could not find it among the open issues ## Describe the bug Using `batched=False` in `Dataset.filter` throws an error ```python TypeError: filter() got an unexpected keyword argument 'batched' ``` but in the docs it is listed as an argument. ## Steps to reproduce the bug ```python task = "mnli" max_length = 128 tokenizer = AutoTokenizer.from_pretrained("./pretrained_models/pretrained_models_drozd/sl250.m.gsic.titech.ac.jp:8000/21.11.17_06.30.32_roberta-base_a0057/checkpoints/smpl_400M/hf/") dataset = load_dataset("glue", task) task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mnli-mm": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } ##### tokenization_parameters sentence1_key, sentence2_key = task_to_keys[task] def preprocess_function(examples, max_length): if sentence2_key is None: return tokenizer( examples[sentence1_key], truncation=True, max_length=max_length ) return tokenizer( examples[sentence1_key], examples[sentence2_key], truncation=False, padding="max_length", max_length=max_length, ) encoded_dataset = dataset.map( lambda x: preprocess_function(x, max_length=max_length), batched=False ) encoded_dataset.filter(lambda x: len(x['input_ids']) <= max_length, batched=False) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1, 1.17.0 - Platform: ubuntu - Python version: 3.8.12
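For reference, a sketch of the batched form that the docs describe, assuming a `datasets` release in which `filter` actually accepts `batched=True`; the filter function then receives a batch (a dict of lists) and must return one boolean per example:

```python
# Keep only examples whose tokenized length fits within max_length.
filtered_dataset = encoded_dataset.filter(
    lambda batch: [len(ids) <= max_length for ids in batch["input_ids"]],
    batched=True,
)
```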
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3503/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3503/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
5 days, 22:23:23
https://api.github.com/repos/huggingface/datasets/issues/3499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3499/comments
https://api.github.com/repos/huggingface/datasets/issues/3499/events
https://github.com/huggingface/datasets/issues/3499
1,090,132,618
I_kwDODunzps5A-hqK
3,499
Adjusting chunk size for streaming datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JoelNiklaus", "id": 3775944, "login": "JoelNiklaus", "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "type": "User", "url": "https://api.github.com/users/JoelNiklaus", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to increase `fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE `\r\n\r\nCurrently this is unfortunately done in a single thread, so it blocks the processing to download and uncompress the next block. At one point it would be nice to be able to do that in parallel !", "Hi! Thanks for the help, I will try it :)" ]
2021-12-28T21:17:53
2022-05-06T16:29:05
2022-05-06T16:29:05
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the frequent decompression. **Describe the solution you'd like** I would appreciate a parameter in the load_dataset function that allows me to set the chunk size myself (to a value like 100'000 in my case). That way, I hope to improve the processing time.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3499/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
128 days, 19:11:12
https://api.github.com/repos/huggingface/datasets/issues/3497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3497/comments
https://api.github.com/repos/huggingface/datasets/issues/3497/events
https://github.com/huggingface/datasets/issues/3497
1,090,050,148
I_kwDODunzps5A-Nhk
3,497
Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "Same error occures when using max samples with https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py", "I'm seeing this too, when using preprocessing_num_workers with \r\nhttps://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py" ]
2021-12-28T18:03:49
2022-01-21T13:22:27
2022-01-21T13:22:27
CONTRIBUTOR
null
null
null
null
Running: ```python from datasets import load_dataset, DatasetDict import datasets from transformers import AutoFeatureExtractor raw_datasets = DatasetDict() raw_datasets["train"] = load_dataset("common_voice", "ab", split="train") feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") raw_datasets = raw_datasets.cast_column( "audio", datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate) ) num_workers = 16 def prepare_dataset(batch): sample = batch["audio"] inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) batch["input_values"] = inputs.input_values[0] batch["input_length"] = len(batch["input_values"]) return batch raw_datasets.map( prepare_dataset, remove_columns=next(iter(raw_datasets.values())).column_names, num_proc=16, desc="preprocess datasets", ) ``` gives ```bash File "/home/patrick/experiments/run_bug.py", line 25, in <module> raw_datasets.map( File "/home/patrick/python_bin/datasets/dataset_dict.py", line 492, in map { File "/home/patrick/python_bin/datasets/dataset_dict.py", line 493, in <dictcomp> k: dataset.map( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2139, in map shards = [ File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2140, in <listcomp> self.shard(num_shards=num_proc, index=rank, contiguous=True, keep_in_memory=keep_in_memory) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 3164, in shard return self.select( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 485, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/patrick/python_bin/datasets/fingerprint.py", line 411, in wrapper out = func(self, *args, **kwargs) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2756, in select return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2667, in _new_dataset_with_indices return Dataset( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 659, in __init__ raise ValueError( ValueError: External features info don't match the dataset: Got {'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)} with type struct<client_id: string, path: string, audio: string, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string> but expected something like {'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)} with type struct<client_id: string, path: string, audio: struct<path: string, bytes: binary>, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string> ``` Versions: ```python - `datasets` version: 1.16.2.dev0 - Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 6.0.1 ``` and `transformers`: ``` - `transformers` version: 4.16.0.dev0 - Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33 - Python version: 3.9.7 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3497/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3497/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
23 days, 19:18:38
https://api.github.com/repos/huggingface/datasets/issues/3495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3495/comments
https://api.github.com/repos/huggingface/datasets/issues/3495/events
https://github.com/huggingface/datasets/issues/3495
1,089,983,632
I_kwDODunzps5A99SQ
3,495
Add VoxLingua107
{ "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jaketae", "id": 25360440, "login": "jaketae", "node_id": "MDQ6VXNlcjI1MzYwNDQw", "organizations_url": "https://api.github.com/users/jaketae/orgs", "received_events_url": "https://api.github.com/users/jaketae/received_events", "repos_url": "https://api.github.com/users/jaketae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "type": "User", "url": "https://api.github.com/users/jaketae", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
[]
2021-12-28T15:51:43
2021-12-28T15:51:43
null
CONTRIBUTOR
null
null
null
null
## Adding a Dataset - **Name:** VoxLingua107 - **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. - **Paper:** https://arxiv.org/abs/2011.12998 - **Data:** http://bark.phon.ioc.ee/voxlingua107/ - **Motivation:** 107 languages, totaling 6628 hours for the train split. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3495/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3495/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3491/comments
https://api.github.com/repos/huggingface/datasets/issues/3491/events
https://github.com/huggingface/datasets/issues/3491
1,089,918,018
I_kwDODunzps5A9tRC
3,491
Update version of pib dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[]
2021-12-28T14:03:58
2021-12-29T08:42:57
2021-12-29T08:42:57
MEMBER
null
null
null
null
On the Hub we have v0, while there exists v1.3. Related to bigscience-workshop/data_tooling#130
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3491/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3491/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
18:38:59
https://api.github.com/repos/huggingface/datasets/issues/3490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3490/comments
https://api.github.com/repos/huggingface/datasets/issues/3490/events
https://github.com/huggingface/datasets/issues/3490
1,089,730,181
I_kwDODunzps5A8_aF
3,490
Does datasets support load text from HDFS?
{ "avatar_url": "https://avatars.githubusercontent.com/u/20511825?v=4", "events_url": "https://api.github.com/users/dancingpipi/events{/privacy}", "followers_url": "https://api.github.com/users/dancingpipi/followers", "following_url": "https://api.github.com/users/dancingpipi/following{/other_user}", "gists_url": "https://api.github.com/users/dancingpipi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dancingpipi", "id": 20511825, "login": "dancingpipi", "node_id": "MDQ6VXNlcjIwNTExODI1", "organizations_url": "https://api.github.com/users/dancingpipi/orgs", "received_events_url": "https://api.github.com/users/dancingpipi/received_events", "repos_url": "https://api.github.com/users/dancingpipi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dancingpipi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dancingpipi/subscriptions", "type": "User", "url": "https://api.github.com/users/dancingpipi", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)" ]
2021-12-28T08:56:02
2022-02-14T14:00:51
null
NONE
null
null
null
null
The raw text data is stored on HDFS due to the dataset's size is too large to store on my develop machine, so I wander does datasets support read data from hdfs?
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3490/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3488/comments
https://api.github.com/repos/huggingface/datasets/issues/3488/events
https://github.com/huggingface/datasets/issues/3488
1,089,345,653
I_kwDODunzps5A7hh1
3,488
URL query parameters are set as path in the compression hop for fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "I think the test passes because it simply ignore what's after `gzip://`.\r\n\r\nThe returned urlpath is expected to look like `gzip://filename::url`, and the filename is currently considered to be what's after the final `/`, hence the result.\r\n\r\nWe can decide to change this and simply have `gzip://::url`, this way we don't need to guess the filename, what do you think ?" ]
2021-12-27T16:29:00
2022-01-05T15:15:25
null
MEMBER
null
null
null
null
## Describe the bug There is an ssue with `StreamingDownloadManager._extract`. I don't know how the test `test_streaming_gg_drive_gzipped` passes: For ```python TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz" urlpath = StreamingDownloadManager().download_and_extract(TEST_GG_DRIVE_GZIPPED_URL) ``` gives `urlpath`: ```python 'gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz::https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz' ``` The gzip path makes no sense: `gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz` ## Steps to reproduce the bug ```python from datasets.utils.streaming_download_manager import StreamingDownloadManager dl_manager = StreamingDownloadManager() urlpath = dl_manager.extract("https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz") print(urlpath) ``` ## Expected results The query parameters should not be set as part of the path.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3488/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3488/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3485/comments
https://api.github.com/repos/huggingface/datasets/issues/3485/events
https://github.com/huggingface/datasets/issues/3485
1,089,027,581
I_kwDODunzps5A6T39
3,485
skip columns which cannot set to specific format when set_format
{ "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tshu-w", "id": 13161779, "login": "tshu-w", "node_id": "MDQ6VXNlcjEzMTYxNzc5", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "repos_url": "https://api.github.com/users/tshu-w/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "type": "User", "url": "https://api.github.com/users/tshu-w", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "You can add columns that you wish to set into `torch` format using `dataset.set_format(\"torch\", ['id', 'abc'])` so that input batch of the transform only contains those columns", "Sorry, I miss `output_all_columns` args and thought after `dataset.set_format(\"torch\", columns=columns)` I can only get specific columns I assigned." ]
2021-12-27T07:19:55
2021-12-27T09:07:07
2021-12-27T09:07:07
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every columns in datasets can convert to `torch`, however, sometimes I want to keep some string columns. **Describe the solution you'd like** skip columns which cannot set to specific format when set_format instead of raise an error.
{ "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tshu-w", "id": 13161779, "login": "tshu-w", "node_id": "MDQ6VXNlcjEzMTYxNzc5", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "repos_url": "https://api.github.com/users/tshu-w/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "type": "User", "url": "https://api.github.com/users/tshu-w", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3485/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3485/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1:47:12
https://api.github.com/repos/huggingface/datasets/issues/3484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3484/comments
https://api.github.com/repos/huggingface/datasets/issues/3484/events
https://github.com/huggingface/datasets/issues/3484
1,088,910,402
I_kwDODunzps5A53RC
3,484
make shape verification to use ArrayXD instead of nested lists for map
{ "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tshu-w", "id": 13161779, "login": "tshu-w", "node_id": "MDQ6VXNlcjEzMTYxNzc5", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "repos_url": "https://api.github.com/users/tshu-w/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "type": "User", "url": "https://api.github.com/users/tshu-w", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Hi! \r\n\r\nYes, this makes sense for numeric values, but first I have to finish https://github.com/huggingface/datasets/pull/3336 because currently ArrayXD only allows the first dimension to be dynamic." ]
2021-12-27T02:16:02
2022-01-05T13:54:03
null
NONE
null
null
null
null
As describe in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO make shape verifcaiton to use ArrayXD instead of nested lists for map can help user reduce unnecessary cast. I notice datasets have done something special for `input_ids` and `attention_mask` which is also unnecessary after this feature added.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3484/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3484/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3480/comments
https://api.github.com/repos/huggingface/datasets/issues/3480/events
https://github.com/huggingface/datasets/issues/3480
1,088,267,110
I_kwDODunzps5A3aNm
3,480
the compression format requested when saving a dataset in json format is not respected
{ "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SaulLu", "id": 55560583, "login": "SaulLu", "node_id": "MDQ6VXNlcjU1NTYwNTgz", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "repos_url": "https://api.github.com/users/SaulLu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "type": "User", "url": "https://api.github.com/users/SaulLu", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Thanks for reporting @SaulLu.\r\n\r\nAt first sight I think the problem is caused because `pandas` only takes into account the `compression` parameter if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with `None` `path_or_buf`.\r\n\r\nWe should fix this:\r\n- either handling directly the `compression` parameter ourselves\r\n- or refactoring to pass non-null path or buffer to pandas\r\n\r\nCC: @lhoestq", "I was thinking if we can handle the `compression` parameter by ourselves? Compression types will be similar to what `pandas` offer. Initially, we can try this with 2-3 compression types and see how good/bad it is? Let me know if it sounds good, I can raise a PR for this next week", "Hi ! Thanks for your help @bhavitvyamalik :)\r\nMaybe let's start with `gzip` ? I think it's the most common use case, then if we're fine with it we can add other compression methods" ]
2021-12-24T09:23:51
2022-01-05T13:03:35
2022-01-05T13:03:35
CONTRIBUTOR
null
null
null
null
## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. however when we pass for example `compression="gzip"`, the saved file is not compressed. Would you also have expected compression to be applied? :relaxed: ## Steps to reproduce the bug ```python my_dict = {"a": [1, 2, 3], "b": [1, 2, 3]} ``` ### Result with datasets ```python from datasets import Dataset dataset = Dataset.from_dict(my_dict) dataset.to_json("dic_with_datasets.jsonl.gz", compression="gzip") !cat dic_with_datasets.jsonl.gz ``` output ``` {"a":1,"b":1} {"a":2,"b":2} {"a":3,"b":3} ``` Note: I would expected to see binary data here ### Result with pandas ```python import pandas as pd df = pd.DataFrame(my_dict) df.to_json("dic_with_pandas.jsonl.gz", lines=True, orient="records", compression="gzip") !cat dic_with_pandas.jsonl.gz ``` output ``` 4��a�dic_with_pandas.jsonl��VJT�2�QJ��\� ��g��yƵ���������)��� ``` Note: It looks like binary data ## Expected results I would have expected that the saved result with datasets would also be a binary file ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.11 - PyArrow version: 5.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3480/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3480/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
12 days, 3:39:44
https://api.github.com/repos/huggingface/datasets/issues/3479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3479/comments
https://api.github.com/repos/huggingface/datasets/issues/3479/events
https://github.com/huggingface/datasets/issues/3479
1,088,232,880
I_kwDODunzps5A3R2w
3,479
Dataset preview is not available (I think for all Hugging Face datasets)
{ "avatar_url": "https://avatars.githubusercontent.com/u/66887439?v=4", "events_url": "https://api.github.com/users/Abirate/events{/privacy}", "followers_url": "https://api.github.com/users/Abirate/followers", "following_url": "https://api.github.com/users/Abirate/following{/other_user}", "gists_url": "https://api.github.com/users/Abirate/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Abirate", "id": 66887439, "login": "Abirate", "node_id": "MDQ6VXNlcjY2ODg3NDM5", "organizations_url": "https://api.github.com/users/Abirate/orgs", "received_events_url": "https://api.github.com/users/Abirate/received_events", "repos_url": "https://api.github.com/users/Abirate/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Abirate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abirate/subscriptions", "type": "User", "url": "https://api.github.com/users/Abirate", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" } ]
[ "You're right, we have an issue today with the datasets preview. We're investigating.", "It should be fixed now. Thanks for reporting.", "Down again. ", "Fixed for good." ]
2021-12-24T08:18:48
2021-12-24T14:27:46
2021-12-24T14:27:46
NONE
null
null
null
null
## Dataset viewer issue for '*french_book_reviews*' **Link:** https://huggingface.co/datasets/Abirate/french_book_reviews **short description of the issue** For my dataset, the dataset preview is no longer functional (it used to work: The dataset had been added the day before and it was fine...) And, after looking over the datasets, I discovered that this issue affects all Hugging Face datasets (as of yesterday, December 23, 2021, around 10 p.m. (CET)). **Am I the one who added this dataset** : Yes **Note**: here a screenshot showing the issue ![Dataset preview is not available for my dataset](https://user-images.githubusercontent.com/66887439/147333078-60734578-420d-4e91-8691-a90afeaa8948.jpg) **And here for glue dataset :** ![Dataset preview is not available for other Hugging Face datasets(glue)](https://user-images.githubusercontent.com/66887439/147333492-26fa530c-befd-4992-8361-70c51397a25a.jpg)
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3479/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3479/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
6:08:58
https://api.github.com/repos/huggingface/datasets/issues/3475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3475/comments
https://api.github.com/repos/huggingface/datasets/issues/3475/events
https://github.com/huggingface/datasets/issues/3475
1,087,352,041
I_kwDODunzps5Az6zp
3,475
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
{ "avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4", "events_url": "https://api.github.com/users/puzzler10/events{/privacy}", "followers_url": "https://api.github.com/users/puzzler10/followers", "following_url": "https://api.github.com/users/puzzler10/following{/other_user}", "gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/puzzler10", "id": 17426779, "login": "puzzler10", "node_id": "MDQ6VXNlcjE3NDI2Nzc5", "organizations_url": "https://api.github.com/users/puzzler10/orgs", "received_events_url": "https://api.github.com/users/puzzler10/received_events", "repos_url": "https://api.github.com/users/puzzler10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions", "type": "User", "url": "https://api.github.com/users/puzzler10", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "Hi @puzzler10, thanks for reporting.\r\n\r\nPlease note this dataset is not hosted on Hugging Face Hub. See: \r\nhttps://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42\r\n\r\nIf there are issues with the source data of a dataset, you should contact the data owners/creators instead. In the homepage associated with this dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/), you can find the authors of the dataset and how to contact them:\r\n> If you have any questions or comments regarding this site, please send email to Bo Pang or Lillian Lee.\r\n\r\nP.S.: Please also note that the example you gave of non-English review is in Portuguese (not Spanish). ;)", "Maybe best to just put a quick sentence in the dataset description that highlights this? " ]
2021-12-23T03:56:43
2021-12-24T00:23:03
null
NONE
null
null
null
null
## Describe the bug See title. I don't think this is intentional and they probably should be removed. If they stay the dataset description should be at least updated to make it clear to the user. ## Steps to reproduce the bug Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomatoes) for the dataset, set the offset to 4160 for the train dataset, and scroll through the results. I found ones at index 4166 and 4173. There's others too (e.g. index 2888) but those two are easy to find like that. ## Expected results English movie reviews only. ## Actual results Example of a Spanish movie review (4173): > "É uma pena que , mais tarde , o próprio filme abandone o tom de paródia e passe a utilizar os mesmos clichês que havia satirizado "
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3475/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3475/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/3473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3473/comments
https://api.github.com/repos/huggingface/datasets/issues/3473/events
https://github.com/huggingface/datasets/issues/3473
1,086,937,610
I_kwDODunzps5AyVoK
3,473
Iterating over a vision dataset doesn't decode the images
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
closed
false
null
[]
[ "As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.", "> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed\r\n\r\nhttps://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. After we merge the linked PR, the user can easily disable it again.", "@mariosasko I wonder why there is no issue in `Audio` feature with decoding disabled in `__iter__`, whereas there is in `Image` feature.\r\n\r\nEnabling decoding in `__iter__` will make fail Audio regressions tests: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_not_decoded\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded\r\n========================= 2 failed, 15 passed in 8.37s =========================", "Please also note that the regression tests were implemented in accordance with the specifications:\r\n- when doing a `map` (wich calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).", "> I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature.\r\n\r\n@albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.\r\n", "Therefore, this is not an issue, neither for Audio nor Image feature.\r\n\r\nCould you please elaborate more on the expected use case? @lhoestq @NielsRogge \r\n\r\nThe expected use cases (in accordance with the specs: see #2324):\r\n- decoding should be enabled when accessing a specific item (`__getitem__`)\r\n- decoding should be disabled while iterating (`__iter__`) to allow preprocessing of non-audio/image features (like label or text, for example) using `.map`\r\n- decoding should be enabled in a `.map` only if the `.map` function accesses the audio/image feature (implemented using `LazyDict`)", "For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, \r\n\r\ni.e. I did this:\r\n\r\n`batch = next(iter(train_ds)) `\r\n\r\nwhereas I actually wanted to do\r\n\r\n`batch = next(iter(train_dataloader))`\r\n\r\nand then it turned out that in the first case, the image was a string of bytes rather than a Pillow image, hence Quentin opened an issue.", "Thanks @NielsRogge for the context.\r\n\r\nSo IMO everything is working as expected.\r\n\r\nI'm closing this issue. 
Feel free to reopen it again if further changes of the specs should be addressed.", "Thanks for the details :)\r\n\r\nI still think that it's unexpected to get different results when doing\r\n```python\r\nfor i in range(len(dataset)):\r\n sample = dataset[i]\r\n```\r\nand\r\n```python\r\nfor sample in dataset:\r\n pass\r\n```\r\neven though I understand that if you don't need to decode the data, then decoding image or audio data when iterating is a waste of time and resources.\r\n\r\nBut in this case users can still drop the column that need decoding to get the full speed back no ?" ]
2021-12-22T15:26:32
2021-12-27T14:13:21
2021-12-23T15:21:57
MEMBER
null
null
null
null
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes first_image = next(iter(mnist))["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails ``` ## Expected results The image should be decoded, as a PIL Image ## Actual results We get a dictionary ``` {'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None} ``` ## Environment info - `datasets` version: 1.17.1.dev0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.2 - PyArrow version: 6.0.0 The bug also exists in 1.17.0 ## Investigation I think the issue is that decoding is disabled in `__iter__`: https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661 Do you remember why it was disabled in the first place @albertvillanova ? Also cc @mariosasko @NielsRogge
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3473/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
23:55:25
https://api.github.com/repos/huggingface/datasets/issues/3465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3465/comments
https://api.github.com/repos/huggingface/datasets/issues/3465/events
https://github.com/huggingface/datasets/issues/3465
1,085,400,432
I_kwDODunzps5AseVw
3,465
Unable to load 'cnn_dailymail' dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42352729?v=4", "events_url": "https://api.github.com/users/talha1503/events{/privacy}", "followers_url": "https://api.github.com/users/talha1503/followers", "following_url": "https://api.github.com/users/talha1503/following{/other_user}", "gists_url": "https://api.github.com/users/talha1503/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/talha1503", "id": 42352729, "login": "talha1503", "node_id": "MDQ6VXNlcjQyMzUyNzI5", "organizations_url": "https://api.github.com/users/talha1503/orgs", "received_events_url": "https://api.github.com/users/talha1503/received_events", "repos_url": "https://api.github.com/users/talha1503/repos", "site_admin": false, "starred_url": "https://api.github.com/users/talha1503/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/talha1503/subscriptions", "type": "User", "url": "https://api.github.com/users/talha1503", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
[ "Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?", "This looks related to https://github.com/huggingface/datasets/issues/996", "It seems that [this](https://huggingface.co/datasets/ccdv/cnn_dailymail) copy of the dataset has fixed the problem", "thank you @AyhamAlom ...\r\nit resolved the error" ]
2021-12-21T03:32:21
2024-06-12T14:41:17
2022-02-17T14:13:57
NONE
null
null
null
null
## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True) ``` ## Expected results Expecting to load 'cnn_dailymail' dataset. ## Actual results `NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3465/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3465/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
58 days, 10:41:36
https://api.github.com/repos/huggingface/datasets/issues/3464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3464/comments
https://api.github.com/repos/huggingface/datasets/issues/3464/events
https://github.com/huggingface/datasets/issues/3464
1,085,399,097
I_kwDODunzps5AseA5
3,464
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
{ "avatar_url": "https://avatars.githubusercontent.com/u/30341159?v=4", "events_url": "https://api.github.com/users/koukoulala/events{/privacy}", "followers_url": "https://api.github.com/users/koukoulala/followers", "following_url": "https://api.github.com/users/koukoulala/following{/other_user}", "gists_url": "https://api.github.com/users/koukoulala/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/koukoulala", "id": 30341159, "login": "koukoulala", "node_id": "MDQ6VXNlcjMwMzQxMTU5", "organizations_url": "https://api.github.com/users/koukoulala/orgs", "received_events_url": "https://api.github.com/users/koukoulala/received_events", "repos_url": "https://api.github.com/users/koukoulala/repos", "site_admin": false, "starred_url": "https://api.github.com/users/koukoulala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koukoulala/subscriptions", "type": "User", "url": "https://api.github.com/users/koukoulala", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "Hi ! Can you try setting `datasets.config.MAX_TABLE_NBYTES_FOR_PICKLING` to a smaller value than `4 << 30` (4GiB), for example `500 << 20` (500MiB) ? It should reduce the maximum size of the arrow table being pickled during multiprocessing.\r\n\r\nIf it fixes the issue, we can consider lowering the default value for everyone.", "@lhoestq I tried that just now but didn't seem to help." ]
2021-12-21T03:29:01
2022-11-21T19:55:11
null
NONE
null
null
null
null
## Describe the bug A clear and concise description of what the bug is. using latest datasets=datasets-1.16.1-py3-none-any.whl process my own multilingual dataset by following codes, and the number of rows in all dataset is 306000, the max_length of each sentence is 256: ![image](https://user-images.githubusercontent.com/30341159/146865779-3d25d011-1f42-4026-9e1b-76f6e1d172e9.png) then I get this error: ![image](https://user-images.githubusercontent.com/30341159/146865844-e60a404c-5f3a-403c-b2f1-acd943b5cdb8.png) I have seen the issue in #2134 and #2150, so I don't understand why latest repo still can't deal with big dataset. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux docker - Python version: 3.6
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3464/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3464/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null