Column schema (name: type, observed range):

url: string (lengths 58–61)
repository_url: string (1 distinct value)
labels_url: string (lengths 72–75)
comments_url: string (lengths 67–70)
events_url: string (lengths 65–68)
html_url: string (lengths 48–51)
id: int64 (600M–3.67B)
node_id: string (lengths 18–24)
number: int64 (2–7.88k)
title: string (lengths 1–290)
user: dict
labels: list (lengths 0–4)
state: string (2 distinct values)
locked: bool (1 distinct value)
assignee: dict
assignees: list (lengths 0–4)
comments: list (lengths 0–30)
created_at: timestamp[s] (2020-04-14 18:18:51 – 2025-11-26 16:16:56)
updated_at: timestamp[s] (2020-04-29 09:23:05 – 2025-11-30 03:52:07)
closed_at: timestamp[s] (2020-04-29 09:23:05 – 2025-11-21 12:31:19)
author_association: string (4 distinct values)
type: null
active_lock_reason: null
draft: null
pull_request: null
body: string (lengths 0–228k)
closed_by: dict
reactions: dict
timeline_url: string (lengths 67–70)
performed_via_github_app: null
state_reason: string (4 distinct values)
sub_issues_summary: dict
issue_dependencies_summary: dict
is_pull_request: bool (1 distinct value)
closed_at_time_taken: duration[s]
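The records below follow this schema, one GitHub issue per row. As a rough sketch of how such a dump could be loaded and inspected with the `datasets` library (the Hub repository id is a hypothetical placeholder; it is not given anywhere in this dump):

```python
from datasets import load_dataset

# Hypothetical repository id; the actual Hub location of this dump is not stated here.
ds = load_dataset("my-org/github-issues-dump", split="train")

# Inspect the column schema and a single record.
print(ds.features)                               # column names and types, matching the schema above
print(ds[0]["title"])                            # e.g. the title of the first issue
print(ds[0]["state"], ds[0]["is_pull_request"])  # per-row metadata fields
```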
https://api.github.com/repos/huggingface/datasets/issues/2775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2775/comments
https://api.github.com/repos/huggingface/datasets/issues/2775/events
https://github.com/huggingface/datasets/issues/2775
964,303,626
MDU6SXNzdWU5NjQzMDM2MjY=
2,775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
{ "avatar_url": "https://avatars.githubusercontent.com/u/1170062?v=4", "events_url": "https://api.github.com/users/mbforbes/events{/privacy}", "followers_url": "https://api.github.com/users/mbforbes/followers", "following_url": "https://api.github.com/users/mbforbes/following{/other_user}", "gists_url": "https://api.github.com/users/mbforbes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mbforbes", "id": 1170062, "login": "mbforbes", "node_id": "MDQ6VXNlcjExNzAwNjI=", "organizations_url": "https://api.github.com/users/mbforbes/orgs", "received_events_url": "https://api.github.com/users/mbforbes/received_events", "repos_url": "https://api.github.com/users/mbforbes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mbforbes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbforbes/subscriptions", "type": "User", "url": "https://api.github.com/users/mbforbes", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo", "Hi !\r\n\r\nIMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RNG just to generate random fingerprints.\r\n\r\nAny opinion on this @LysandreJik ?", "Yes, this sounds good @lhoestq " ]
2021-08-09T19:28:51
2024-01-26T15:05:36
2024-01-26T15:05:35
NONE
null
null
null
null
## Describe the bug

**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below.

Hi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected:

https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L260-L265

However, what's not expected is that `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like:

```text
Loading cached processed dataset at /home/xxx/.cache/huggingface/datasets/csv/default-xxx/0.0.0/xxx/cache-xxx.arrow
```

The path is exactly the same each run (e.g., last 26 runs). This becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000.

I think that

https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L248

... is actually consistent because randomness is being controlled in HuggingFace/Transformers for reproducibility. I've added a demo of this below.

## Steps to reproduce the bug

```python
# Contents of print_fingerprint.py
from transformers import set_seed
from datasets.fingerprint import generate_random_fingerprint

set_seed(42)
print(generate_random_fingerprint())
```

```bash
for i in {0..10}; do
    python print_fingerprint.py
done

1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
```

## Expected results

After the "random hash" warning is emitted, a random hash is generated, and no outdated cached datasets are reused.

## Actual results

After the "random hash" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run.

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1
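The comments above float the idea of giving `datasets` its own random number generator for fingerprints instead of touching `set_seed` in `transformers`. A minimal sketch of that idea (not the fix that actually shipped; the private RNG and helper shown here are illustrative assumptions):

```python
import random
from transformers import set_seed

# A private RNG instance, seeded from OS entropy at import time; it is not
# affected by the random.seed(...) / numpy seeding done by transformers' set_seed().
_FINGERPRINT_RNG = random.Random()

def generate_random_fingerprint(nbits: int = 64) -> str:
    # Draw from the private RNG so a globally fixed seed does not make the
    # "random" fingerprint identical on every run.
    return f"{_FINGERPRINT_RNG.getrandbits(nbits):016x}"

set_seed(42)
print(generate_random_fingerprint())  # changes between runs despite set_seed(42)
```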
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2775/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2775/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
899 days, 19:36:44
https://api.github.com/repos/huggingface/datasets/issues/2773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2773/comments
https://api.github.com/repos/huggingface/datasets/issues/2773/events
https://github.com/huggingface/datasets/issues/2773
963,730,497
MDU6SXNzdWU5NjM3MzA0OTc=
2,773
Remove dataset_infos.json
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
[]
[ "This was closed by:\r\n- #4926" ]
2021-08-09T07:43:19
2024-05-04T14:52:10
2024-05-04T14:52:10
MEMBER
null
null
null
null
**Is your feature request related to a problem? Please describe.**

As discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file.

Others could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_bytes, num_examples]",...

However, there are others that do not seem too meaningful in the README, like the checksums.

**Describe the solution you'd like**

Open a discussion to decide what to do with the `dataset_infos.json` files: which information should be migrated and/or which information should be kept.

cc: @julien-c @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2773/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2773/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
999 days, 7:08:51
https://api.github.com/repos/huggingface/datasets/issues/2772
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2772/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2772/comments
https://api.github.com/repos/huggingface/datasets/issues/2772/events
https://github.com/huggingface/datasets/issues/2772
963,348,834
MDU6SXNzdWU5NjMzNDg4MzQ=
2,772
Remove returned feature constraint
{ "avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4", "events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}", "followers_url": "https://api.github.com/users/PosoSAgapo/followers", "following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}", "gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PosoSAgapo", "id": 33200481, "login": "PosoSAgapo", "node_id": "MDQ6VXNlcjMzMjAwNDgx", "organizations_url": "https://api.github.com/users/PosoSAgapo/orgs", "received_events_url": "https://api.github.com/users/PosoSAgapo/received_events", "repos_url": "https://api.github.com/users/PosoSAgapo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions", "type": "User", "url": "https://api.github.com/users/PosoSAgapo", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[]
2021-08-08T04:01:30
2021-08-08T08:48:01
null
NONE
null
null
null
null
In the current version, the returned value of the `map` function has to be a list or an ndarray, but this makes it unsuitable for many tasks. In NLP, many features are sparse (for example verb words or noun chunks): if we want to assign different values to different words and only score the useful words (such as verbs), the result is a large sparse matrix. At scale, saving that matrix densely takes a lot of disk storage and makes it hard to read, so the usual approach is to save it in a sparse format. However, NumPy does not support sparse arrays, so I have to use PyTorch or SciPy to convert the matrix into a special sparse form, which cannot be turned back into a list or ndarray. This violates the feature constraints of the `map` function. I appreciate the convenience of the Datasets package, but I do not think the compulsory datatype constraint is necessary; in some cases we simply cannot transform the output into a list or ndarray. Is there any way to fix this, or a way to disable the compulsory datatype constraint?
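One workaround that fits the current constraint is to store the sparse matrix in coordinate form (plain lists of indices and values), which `map` can return, and rebuild the SciPy/PyTorch sparse object only when it is needed. A minimal sketch under that assumption (the column names and scoring rule are illustrative, not part of the original report):

```python
import scipy.sparse as sp
from datasets import Dataset

ds = Dataset.from_dict({"tokens": [["a", "quick", "fox"], ["jumps", "over"]]})

def to_coo_columns(example):
    # Hypothetical sparse scores: here, only "useful" (longer) tokens get a value.
    scores = {i: float(len(t)) for i, t in enumerate(example["tokens"]) if len(t) > 3}
    # Store the sparse data as plain lists, which `map` accepts.
    return {
        "score_indices": list(scores.keys()),
        "score_values": list(scores.values()),
        "length": len(example["tokens"]),
    }

ds = ds.map(to_coo_columns)

# Rebuild a SciPy sparse row vector on demand from the stored coordinates.
row = ds[0]
vec = sp.coo_matrix(
    (row["score_values"], ([0] * len(row["score_indices"]), row["score_indices"])),
    shape=(1, row["length"]),
)
print(vec.toarray())
```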
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2772/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2772/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2768/comments
https://api.github.com/repos/huggingface/datasets/issues/2768/events
https://github.com/huggingface/datasets/issues/2768
963,229,173
MDU6SXNzdWU5NjMyMjkxNzM=
2,768
`ArrowInvalid: Added column's length must match table's length.` after using `select`
{ "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lvwerra", "id": 8264887, "login": "lvwerra", "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "repos_url": "https://api.github.com/users/lvwerra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "type": "User", "url": "https://api.github.com/users/lvwerra", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi,\r\n\r\nthe `select` method creates an indices mapping and doesn't modify the underlying PyArrow table by default for better performance. To modify the underlying table after the `select` call, call `flatten_indices` on the dataset object as follows:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"tweets_hate_speech_detection\")['train'].select(range(128))\r\nds = ds.flatten_indices()\r\nds = ds.add_column('ones', [1]*128)\r\n```", "Thanks for the question @lvwerra. And thanks for the answer @mariosasko. ^^" ]
2021-08-07T13:17:29
2021-08-09T11:26:43
2021-08-09T11:26:43
MEMBER
null
null
null
null
## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ds = ds.add_column('ones', [1]*128) ``` ## Expected results I would expect a new column named `ones` filled with `1`. When I check the length of `ds` it says `128`. Interestingly, it works when calling `ds = ds.map(lambda x: x)` before adding the column. ## Actual results Specify the actual results or traceback. ```python --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) /var/folders/l4/2905jygx4tx5jv8_kn03vxsw0000gn/T/ipykernel_6301/868709636.py in <module> 1 from datasets import load_dataset 2 ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ----> 3 ds = ds.add_column('ones', [0]*128) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 183 } 184 # apply actual function --> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 187 # re-apply format to the output ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint) 2965 column_table = InMemoryTable.from_pydict({name: column}) 2966 # Concatenate tables horizontally -> 2967 table = ConcatenationTable.from_tables([self._data, column_table], axis=1) 2968 # Update features 2969 info = self.info.copy() ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis) 715 table_blocks = to_blocks(table) 716 blocks = _extend_blocks(blocks, table_blocks, axis=axis) --> 717 return cls.from_blocks(blocks) 718 719 @property ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks) 663 return cls(table, blocks) 664 else: --> 665 table = cls._concat_blocks_horizontally_and_vertically(blocks) 666 return cls(table, blocks) 667 ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks_horizontally_and_vertically(cls, blocks) 623 if not tables: 624 continue --> 625 pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1) 626 pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated) 627 return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis) 612 else: 613 for name, col in zip(table.column_names, table.columns): --> 614 pa_table = pa_table.append_column(name, col) 615 return pa_table 616 else: ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column() 
~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Added column's length must match table's length. Expected length 31962 but got length 128 ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 5.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2768/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2768/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 22:09:14
https://api.github.com/repos/huggingface/datasets/issues/2767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2767/comments
https://api.github.com/repos/huggingface/datasets/issues/2767/events
https://github.com/huggingface/datasets/issues/2767
963,002,120
MDU6SXNzdWU5NjMwMDIxMjA=
2,767
Equivalent operation to perform unbatch for huggingface datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4", "events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}", "followers_url": "https://api.github.com/users/dorooddorood606/followers", "following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}", "gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorooddorood606", "id": 79288051, "login": "dorooddorood606", "node_id": "MDQ6VXNlcjc5Mjg4MDUx", "organizations_url": "https://api.github.com/users/dorooddorood606/orgs", "received_events_url": "https://api.github.com/users/dorooddorood606/received_events", "repos_url": "https://api.github.com/users/dorooddorood606/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions", "type": "User", "url": "https://api.github.com/users/dorooddorood606", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @lhoestq \r\nMaybe this is clearer to explain like this, currently map function, map one example to \"one\" modified one, lets assume we want to map one example to \"multiple\" examples, in which we do not know in advance how many examples they would be per each entry. I greatly appreciate telling me how I can handle this operation, thanks a lot", "Hi,\r\nthis is also my question on how to perform similar operation as \"unbatch\" in tensorflow in great huggingface dataset library. \r\nthanks.", "Hi,\r\n\r\n`Dataset.map` in the batched mode allows you to map a single row to multiple rows. So to perform \"unbatch\", you can do the following:\r\n```python\r\nimport collections\r\n\r\ndef unbatch(batch):\r\n new_batch = collections.defaultdict(list)\r\n keys = batch.keys()\r\n for values in zip(*batch.values()):\r\n ex = {k: v for k, v in zip(keys, values)}\r\n inputs = f\"record query: {ex['query']} entities: {', '.join(ex['entities'])} passage: {ex['passage']}\"\r\n new_batch[\"inputs\"].extend([inputs] * len(ex[\"answers\"]))\r\n new_batch[\"targets\"].extend(ex[\"answers\"])\r\n return new_batch\r\n\r\ndset = dset.map(unbatch, batched=True, remove_columns=dset.column_names)\r\n```", "Dear @mariosasko \r\nFirst, thank you very much for coming back to me on this, I appreciate it a lot. I tried this solution, I am getting errors, do you mind\r\ngiving me one test example to be able to run your code, to understand better the format of the inputs to your function?\r\nin this function https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L952 they copy each example to the number of \"answers\", do you mean one should not do the copying part and use directly your function? \r\n\r\n\r\nthank you very much for your help and time.", "Hi @mariosasko \r\nI think finally I got this, I think you mean to do things in one step, here is the full example for completeness:\r\n\r\n```\r\ndef unbatch(batch):\r\n new_batch = collections.defaultdict(list)\r\n keys = batch.keys()\r\n for values in zip(*batch.values()):\r\n ex = {k: v for k, v in zip(keys, values)}\r\n # updates the passage.\r\n passage = ex['passage']\r\n passage = re.sub(r'(\\.|\\?|\\!|\\\"|\\')\\n@highlight\\n', r'\\1 ', passage)\r\n passage = re.sub(r'\\n@highlight\\n', '. ', passage)\r\n inputs = f\"record query: {ex['query']} entities: {', '.join(ex['entities'])} passage: {passage}\"\r\n # duplicates the samples based on number of answers.\r\n num_answers = len(ex[\"answers\"])\r\n num_duplicates = np.maximum(1, num_answers)\r\n new_batch[\"inputs\"].extend([inputs] * num_duplicates) #len(ex[\"answers\"]))\r\n new_batch[\"targets\"].extend(ex[\"answers\"] if num_answers > 0 else [\"<unk>\"])\r\n return new_batch\r\n\r\ndata = datasets.load_dataset('super_glue', 'record', split=\"train\", script_version=\"master\")\r\ndata = data.map(unbatch, batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nThanks a lot again, this was a super great way to do it." ]
2021-08-06T19:45:52
2022-03-07T13:58:00
2022-03-07T13:58:00
NONE
null
null
null
null
Hi, I need the equivalent of TensorFlow's "unbatch" operation on a Hugging Face dataset, but I could not find it. Could you kindly direct me on how to do this? Here is the problem I am trying to solve: I am working with the "record" (ReCoRD) dataset in SuperGLUE, and I need to replicate each entry of the dataset once per answer, to mirror what T5 originally did: https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L925

For example, a typical example from ReCoRD might look like

{ 'passage': 'This is the passage.', 'query': 'A @placeholder is a bird.', 'entities': ['penguin', 'potato', 'pigeon'], 'answers': ['penguin', 'pigeon'], }

and I need a processor that would turn this example into the following two examples:

{ 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'penguin', }

and

{ 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'pigeon', }

Doing this requires an unbatch operation, since each entry can map to multiple samples depending on the number of answers. I am not sure how to perform this with the Hugging Face datasets library and would greatly appreciate your help @lhoestq. Thank you very much.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2767/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2767/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
212 days, 18:12:08
https://api.github.com/repos/huggingface/datasets/issues/2765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2765/comments
https://api.github.com/repos/huggingface/datasets/issues/2765/events
https://github.com/huggingface/datasets/issues/2765
962,861,395
MDU6SXNzdWU5NjI4NjEzOTU=
2,765
BERTScore Error
{ "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gagan3012", "id": 49101362, "login": "gagan3012", "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "repos_url": "https://api.github.com/users/gagan3012/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "type": "User", "url": "https://api.github.com/users/gagan3012", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi,\r\n\r\nThe `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:\r\n```\r\npip uninstall bert-score\r\npip install \"bert-score<0.3.10\"\r\n```" ]
2021-08-06T15:58:57
2021-08-09T11:16:25
2021-08-09T11:16:25
NONE
null
null
null
null
## Describe the bug

A clear and concise description of what the bug is.

## Steps to reproduce the bug

```python
from datasets import load_metric

predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
bert = load_metric('bertscore')
bert.compute(predictions=predictions, references=references, lang='en')
```

# Bug

`TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'`

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Colab
- Python version:
- PyArrow version:
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2765/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2765/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 19:17:28
https://api.github.com/repos/huggingface/datasets/issues/2763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2763/comments
https://api.github.com/repos/huggingface/datasets/issues/2763/events
https://github.com/huggingface/datasets/issues/2763
961,895,523
MDU6SXNzdWU5NjE4OTU1MjM=
2,763
English Wikipedia dataset is not clean
{ "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucadiliello", "id": 23355969, "login": "lucadiliello", "node_id": "MDQ6VXNlcjIzMzU1OTY5", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "repos_url": "https://api.github.com/users/lucadiliello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "type": "User", "url": "https://api.github.com/users/lucadiliello", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! Certain users might need these data (for training or simply to explore/index the dataset).\r\n\r\nFeel free to implement a map function that gets rid of these paragraphs and process the wikipedia dataset with it before training" ]
2021-08-05T14:37:24
2023-07-25T17:43:04
2023-07-25T17:43:04
CONTRIBUTOR
null
null
null
null
## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset w = load_dataset('wikipedia', '20200501.en') print(w['train'][0]['text']) ``` > 'Yangliuqing () is a market town in Xiqing District, in the western suburbs of Tianjin, People\'s Republic of China. Despite its relatively small size, it has been named since 2006 in the "famous historical and cultural market towns in China".\n\nIt is best known in China for creating nianhua or Yangliuqing nianhua. For more than 400 years, Yangliuqing has in effect specialised in the creation of these woodcuts for the New Year. wood block prints using vivid colourschemes to portray traditional scenes of children\'s games often interwoven with auspiciouse objects.\n\n, it had 27 residential communities () and 25 villages under its administration.\n\nShi Family Grand Courtyard\n\nShi Family Grand Courtyard (Tiānjīn Shí Jiā Dà Yuàn, 天津石家大院) is situated in Yangliuqing Town of Xiqing District, which is the former residence of wealthy merchant Shi Yuanshi - the 4th son of Shi Wancheng, one of the eight great masters in Tianjin. First built in 1875, it covers over 6,000 square meters, including large and small yards and over 200 folk houses, a theater and over 275 rooms that served as apartments and places of business and worship for this powerful family. Shifu Garden, which finished its expansion in October 2003, covers 1,200 square meters, incorporates the elegance of imperial garden and delicacy of south garden. Now the courtyard of Shi family covers about 10,000 square meters, which is called the first mansion in North China. Now it serves as the folk custom museum in Yangliuqing, which has a large collection of folk custom museum in Yanliuqing, which has a large collection of folk art pieces like Yanliuqing New Year pictures, brick sculpture.\n\nShi\'s ancestor came from Dong\'e County in Shandong Province, engaged in water transport of grain. As the wealth gradually accumulated, the Shi Family moved to Yangliuqing and bought large tracts of land and set up their residence. Shi Yuanshi came from the fourth generation of the family, who was a successful businessman and a good household manager, and the residence was thus enlarged for several times until it acquired the present scale. It is believed to be the first mansion in the west of Tianjin.\n\nThe residence is symmetric based on the axis formed by a passageway in the middle, on which there are four archways. On the east side of the courtyard, there are traditional single-story houses with rows of rooms around the four sides, which was once the living area for the Shi Family. The rooms on north side were the accountants\' office. On the west are the major constructions including the family hall for worshipping Buddha, theater and the south reception room. On both sides of the residence are side yard rooms for maids and servants.\n\nToday, the Shi mansion, located in the township of Yangliuqing to the west of central Tianjin, stands as a surprisingly well-preserved monument to China\'s pre-revolution mercantile spirit. It also serves as an on-location shoot for many of China\'s popular historical dramas. 
Many of the rooms feature period furniture, paintings and calligraphy, and the extensive Shifu Garden.\n\nPart of the complex has been turned into the Yangliuqing Museum, which includes displays focused on symbolic aspects of the courtyards\' construction, local folk art and customs, and traditional period furnishings and crafts.\n\n**See also \n\nList of township-level divisions of Tianjin\n\nReferences \n\n http://arts.cultural-china.com/en/65Arts4795.html\n\nCategory:Towns in Tianjin'** ## Expected results I expect no junk in the data. ## Actual results Specify the actual results or traceback. ## Environment info - `datasets` version: 1.10.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 3.0.0
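The reply above suggests handling this with a `map` over the dataset rather than changing the dump itself. A minimal sketch of such a cleaning function (the boilerplate headings and the truncation heuristic are assumptions, not an official recipe):

```python
import re
from datasets import load_dataset

# Assumption: these headings mark the start of boilerplate (reference lists,
# category links, "See also" sections) rather than article prose.
_BOILERPLATE = re.compile(
    r"\n(See also|References|External links|Category:).*$",
    flags=re.DOTALL | re.IGNORECASE,
)

def strip_boilerplate(example):
    # Cut the article at the first boilerplate heading, if one is found.
    return {"text": _BOILERPLATE.sub("", example["text"]).strip()}

w = load_dataset("wikipedia", "20200501.en", split="train")
w = w.map(strip_boilerplate)
print(w[0]["text"][:200])
```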
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2763/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2763/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
719 days, 3:05:40
https://api.github.com/repos/huggingface/datasets/issues/2762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2762/comments
https://api.github.com/repos/huggingface/datasets/issues/2762/events
https://github.com/huggingface/datasets/issues/2762
961,652,046
MDU6SXNzdWU5NjE2NTIwNDY=
2,762
Add RVL-CDIP dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr", "user_view_type": "public" } ]
[ "cc @nateraw ", "#self-assign", "[labels_only.tar.gz](https://docs.google.com/uc?authuser=0&id=0B0NKIRwUL9KYcXo3bV9LU0t3SGs&export=download) on the RVL-CDIP website does not work for me.\r\n\r\n> 404. That’s an error. The requested URL was not found on this server.\r\n\r\nI contacted the author ( Adam Harley) regarding this, and he told me that the link works for him. Not sure what the issue is. But Adam shared the file (labels_only.tar.gz) with me as an attachment.\r\n\r\nAre we allowed to host this file(labels_only.tar.gz) elsewhere and use that link instead ?\r\n\r\nThank you.\r\n" ]
2021-08-05T09:57:05
2022-04-21T17:15:41
2022-04-21T17:15:41
CONTRIBUTOR
null
null
null
null
## Adding a Dataset

- **Name:** RVL-CDIP
- **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
- **Paper:** https://www.cs.cmu.edu/~aharley/icdar15/
- **Data:** https://www.cs.cmu.edu/~aharley/rvl-cdip/
- **Motivation:** I'm currently adding LayoutLMv2 and LayoutXLM to HuggingFace Transformers. LayoutLM (v1) already exists in the library. This dataset has a large value for document image classification (i.e. classifying scanned documents). LayoutLM models obtain SOTA on this dataset, so would be great to directly use it in notebooks.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2762/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2762/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
259 days, 7:18:36
https://api.github.com/repos/huggingface/datasets/issues/2761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2761/comments
https://api.github.com/repos/huggingface/datasets/issues/2761/events
https://github.com/huggingface/datasets/issues/2761
961,568,287
MDU6SXNzdWU5NjE1NjgyODc=
2,761
Error loading C4 realnewslike dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32061512?v=4", "events_url": "https://api.github.com/users/danshirron/events{/privacy}", "followers_url": "https://api.github.com/users/danshirron/followers", "following_url": "https://api.github.com/users/danshirron/following{/other_user}", "gists_url": "https://api.github.com/users/danshirron/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danshirron", "id": 32061512, "login": "danshirron", "node_id": "MDQ6VXNlcjMyMDYxNTEy", "organizations_url": "https://api.github.com/users/danshirron/orgs", "received_events_url": "https://api.github.com/users/danshirron/received_events", "repos_url": "https://api.github.com/users/danshirron/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danshirron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danshirron/subscriptions", "type": "User", "url": "https://api.github.com/users/danshirron", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @danshirron, \r\n`c4` was updated few days back by @lhoestq. The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike'].` You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with new configuration.", "@bhavitvyamalik @lhoestq , just tried the above and got:\r\n>>> a=datasets.load_dataset('c4','en.realnewslike')\r\nDownloading: 3.29kB [00:00, 1.66MB/s] \r\nDownloading: 2.40MB [00:00, 12.6MB/s] \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py\", line 819, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py\", line 701, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1049, in __init__\r\n super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 268, in __init__\r\n self.config, self.config_id = self._create_builder_config(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 360, in _create_builder_config\r\n raise ValueError(\r\nValueError: BuilderConfig en.realnewslike not found. Available: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']\r\n>>> \r\n\r\ndatasets version is 1.11.0\r\n", "I think I had an older version of datasets installed and that's why I commented the old configurations in my last comment, my bad! I re-checked and updated it to latest version (`datasets==1.11.0`) and it's showing `available configs: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']`. \r\n\r\nI tried `raw_datasets = load_dataset('c4', 'realnewslike')` and the download started. Make sure you don't have any old copy of this dataset and you download it fresh using the latest version of datasets. Sorry for the mix up!", "It works. I probably had some issue with the cache. after cleaning it im able to download the dataset. Thanks" ]
2021-08-05T08:16:58
2021-08-08T19:44:34
2021-08-08T19:44:34
NONE
null
null
null
null
## Describe the bug

Error loading C4 realnewslike dataset. Validation part mismatch.

## Steps to reproduce the bug

```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
```

## Expected results

Success on data loading.

## Actual results

Downloading: 100%|██████████████████████████████████████████████████████| 15.3M/15.3M [00:00<00:00, 28.1MB/s]
Traceback (most recent call last):
  File "run_mlm_tf.py", line 794, in <module>
    main()
  File "run_mlm_tf.py", line 425, in main
    raw_datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir)
  File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py", line 843, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 608, in download_and_prepare
    self._download_and_prepare(
  File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 698, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=38165657946, num_examples=13799838, dataset_name='c4'), 'recorded': SplitInfo(name='validation', num_bytes=37875873, num_examples=13863, dataset_name='c4')}]

## Environment info

- `datasets` version: 1.10.2
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
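As the follow-up comments note, the error went away after upgrading `datasets`, discarding the stale cached copy, and using the new config name. A minimal sketch of that recovery path (the use of `download_mode="force_redownload"` is my assumption about how to discard a stale cache, not the exact steps the reporter took):

```python
from datasets import load_dataset

# With a recent `datasets` release the valid C4 configs are:
# ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']
# Forcing a re-download sidesteps any stale, partially built cache entry.
raw_datasets = load_dataset("c4", "realnewslike", download_mode="force_redownload")
print(raw_datasets)
```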
{ "avatar_url": "https://avatars.githubusercontent.com/u/32061512?v=4", "events_url": "https://api.github.com/users/danshirron/events{/privacy}", "followers_url": "https://api.github.com/users/danshirron/followers", "following_url": "https://api.github.com/users/danshirron/following{/other_user}", "gists_url": "https://api.github.com/users/danshirron/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danshirron", "id": 32061512, "login": "danshirron", "node_id": "MDQ6VXNlcjMyMDYxNTEy", "organizations_url": "https://api.github.com/users/danshirron/orgs", "received_events_url": "https://api.github.com/users/danshirron/received_events", "repos_url": "https://api.github.com/users/danshirron/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danshirron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danshirron/subscriptions", "type": "User", "url": "https://api.github.com/users/danshirron", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2761/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2761/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3 days, 11:27:36
https://api.github.com/repos/huggingface/datasets/issues/2760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2760/comments
https://api.github.com/repos/huggingface/datasets/issues/2760/events
https://github.com/huggingface/datasets/issues/2760
961,372,667
MDU6SXNzdWU5NjEzNzI2Njc=
2,760
Add Nuswide dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/19774925?v=4", "events_url": "https://api.github.com/users/shivangibithel/events{/privacy}", "followers_url": "https://api.github.com/users/shivangibithel/followers", "following_url": "https://api.github.com/users/shivangibithel/following{/other_user}", "gists_url": "https://api.github.com/users/shivangibithel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shivangibithel", "id": 19774925, "login": "shivangibithel", "node_id": "MDQ6VXNlcjE5Nzc0OTI1", "organizations_url": "https://api.github.com/users/shivangibithel/orgs", "received_events_url": "https://api.github.com/users/shivangibithel/received_events", "repos_url": "https://api.github.com/users/shivangibithel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shivangibithel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shivangibithel/subscriptions", "type": "User", "url": "https://api.github.com/users/shivangibithel", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
[]
2021-08-05T03:00:41
2021-12-08T12:06:23
null
NONE
null
null
null
null
## Adding a Dataset - **Name:** *NUSWIDE* - **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)* - **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-civr2009.pdf)* - **Data:** *[here](https://github.com/wenting-zhao/nuswide)* - **Motivation:** *This dataset is a benchmark in the Text Retrieval task.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2760/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2760/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2757/comments
https://api.github.com/repos/huggingface/datasets/issues/2757/events
https://github.com/huggingface/datasets/issues/2757
959,984,081
MDU6SXNzdWU5NTk5ODQwODE=
2,757
Unexpected type after `concatenate_datasets`
{ "avatar_url": "https://avatars.githubusercontent.com/u/32683010?v=4", "events_url": "https://api.github.com/users/JulesBelveze/events{/privacy}", "followers_url": "https://api.github.com/users/JulesBelveze/followers", "following_url": "https://api.github.com/users/JulesBelveze/following{/other_user}", "gists_url": "https://api.github.com/users/JulesBelveze/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JulesBelveze", "id": 32683010, "login": "JulesBelveze", "node_id": "MDQ6VXNlcjMyNjgzMDEw", "organizations_url": "https://api.github.com/users/JulesBelveze/orgs", "received_events_url": "https://api.github.com/users/JulesBelveze/received_events", "repos_url": "https://api.github.com/users/JulesBelveze/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JulesBelveze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesBelveze/subscriptions", "type": "User", "url": "https://api.github.com/users/JulesBelveze", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @JulesBelveze, thanks for your question.\r\n\r\nNote that 🤗 `datasets` internally store their data in Apache Arrow format.\r\n\r\nHowever, when accessing dataset columns, by default they are returned as native Python objects (lists in this case).\r\n\r\nIf you would like their columns to be returned in a more suitable format for your use case (torch arrays), you can use the method `set_format()`:\r\n```python\r\nconcat_dataset.set_format(type=\"torch\")\r\n```\r\n\r\nYou have detailed information in our docs:\r\n- [Using a Dataset with PyTorch/Tensorflow](https://huggingface.co/docs/datasets/torch_tensorflow.html)\r\n- [Dataset.set_format()](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.set_format)", "Thanks @albertvillanova it indeed did the job 😃 \r\nThanks for your answer!" ]
2021-08-04T07:10:39
2021-08-04T16:01:24
2021-08-04T16:01:23
NONE
null
null
null
null
## Describe the bug I am trying to concatenate two `Dataset` objects using `concatenate_datasets`, but it turns out that after concatenation the features are cast from `torch.Tensor` to `list`. This then leads to weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately, everything behaves as expected. ## Steps to reproduce the bug ```python >>> featurized_teacher Dataset({ features: ['t_labels', 't_input_ids', 't_token_type_ids', 't_attention_mask'], num_rows: 502 }) >>> for f in featurized_teacher.features: print(featurized_teacher[f].shape) torch.Size([502]) torch.Size([502, 300]) torch.Size([502, 300]) torch.Size([502, 300]) >>> featurized_student Dataset({ features: ['s_features', 's_labels'], num_rows: 502 }) >>> for f in featurized_student.features: print(featurized_student[f].shape) torch.Size([502, 64]) torch.Size([502]) ``` The shapes seem alright to me. Then the results after concatenation are as follows: ```python >>> concat_dataset = datasets.concatenate_datasets([featurized_student, featurized_teacher], axis=1) >>> type(concat_dataset["t_labels"]) <class 'list'> ``` One would expect to obtain the same type as the one before concatenation. Am I doing something wrong here? Any idea on how to fix this unexpected behavior? ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/32683010?v=4", "events_url": "https://api.github.com/users/JulesBelveze/events{/privacy}", "followers_url": "https://api.github.com/users/JulesBelveze/followers", "following_url": "https://api.github.com/users/JulesBelveze/following{/other_user}", "gists_url": "https://api.github.com/users/JulesBelveze/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JulesBelveze", "id": 32683010, "login": "JulesBelveze", "node_id": "MDQ6VXNlcjMyNjgzMDEw", "organizations_url": "https://api.github.com/users/JulesBelveze/orgs", "received_events_url": "https://api.github.com/users/JulesBelveze/received_events", "repos_url": "https://api.github.com/users/JulesBelveze/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JulesBelveze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesBelveze/subscriptions", "type": "User", "url": "https://api.github.com/users/JulesBelveze", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2757/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2757/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
8:50:44
https://api.github.com/repos/huggingface/datasets/issues/2750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2750/comments
https://api.github.com/repos/huggingface/datasets/issues/2750/events
https://github.com/huggingface/datasets/issues/2750
958,984,730
MDU6SXNzdWU5NTg5ODQ3MzA=
2,750
Second concatenation of datasets produces errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aktsvigun", "id": 36672861, "login": "Aktsvigun", "node_id": "MDQ6VXNlcjM2NjcyODYx", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "type": "User", "url": "https://api.github.com/users/Aktsvigun", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "@albertvillanova ", "Hi @Aktsvigun, thanks for reporting.\r\n\r\nI'm investigating this.", "Hi @albertvillanova ,\r\nany update on this? Can I probably help in some way?", "Hi @Aktsvigun! We are planning to address this issue before our next release, in a couple of weeks at most. 😅 \r\n\r\nIn the meantime, if you would like to contribute, feel free to open a Pull Request. You are welcome. Here you can find more information: [How to contribute to Datasets?](CONTRIBUTING.md)", "I can't reproduce the bug on master. I believe this issue was fixed by https://github.com/huggingface/datasets/pull/3551." ]
2021-08-03T10:47:04
2022-01-19T14:23:43
2022-01-19T14:19:05
NONE
null
null
null
null
Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the use of a tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets data = load_dataset('trec')['train'] concatenated = concatenate_datasets([data, data]) concatenated_2 = concatenate_datasets([concatenated, concatenated]) print('True features of features:', concatenated.features) print('\nProduced features of features:', concatenated_2.features) ``` outputs ``` True features of features: {'label-coarse': ClassLabel(num_classes=6, names=['DESC', 'ENTY', 'ABBR', 'HUM', 'NUM', 'LOC'], names_file=None, id=None), 'label-fine': ClassLabel(num_classes=47, names=['manner', 'cremat', 'animal', 'exp', 'ind', 'gr', 'title', 'def', 'date', 'reason', 'event', 'state', 'desc', 'count', 'other', 'letter', 'religion', 'food', 'country', 'color', 'termeq', 'city', 'body', 'dismed', 'mount', 'money', 'product', 'period', 'substance', 'sport', 'plant', 'techmeth', 'volsize', 'instru', 'abb', 'speed', 'word', 'lang', 'perc', 'code', 'dist', 'temp', 'symbol', 'ord', 'veh', 'weight', 'currency'], names_file=None, id=None), 'text': Value(dtype='string', id=None)} Produced features of features: {'label-coarse': Value(dtype='int64', id=None), 'label-fine': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)} ``` I am using `datasets` v.1.11.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2750/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2750/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
169 days, 3:32:01
https://api.github.com/repos/huggingface/datasets/issues/2749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2749/comments
https://api.github.com/repos/huggingface/datasets/issues/2749/events
https://github.com/huggingface/datasets/issues/2749
958,968,748
MDU6SXNzdWU5NTg5Njg3NDg=
2,749
Raise a proper exception when trying to stream a dataset that requires to manually download files
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @severo, thanks for reporting.\r\n\r\nAs discussed, datasets requiring manual download should be:\r\n- programmatically identifiable\r\n- properly handled with more clear error message when trying to load them with streaming\r\n\r\nIn relation with programmatically identifiability, note that for datasets requiring manual download, their builder have a property `manual_download_instructions` which is not None:\r\n```python\r\n# Dataset requiring manual download:\r\nbuilder.manual_download_instructions is not None\r\n```", "Thanks @albertvillanova " ]
2021-08-03T10:26:27
2021-08-09T08:53:35
2021-08-04T11:36:30
COLLABORATOR
null
null
null
null
## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reclor", streaming=True) ``` ## Expected results Ideally: raise a specific exception, something like `ManualDownloadError`. Or at least give the reason in the message, as when we load in normal mode: ```python from datasets import load_dataset dataset = load_dataset("reclor") ``` ``` AssertionError: The dataset reclor with config default requires manual data. Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http://whyu.me/reclor/) fill the google form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path/to/folder/folder_name') . Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='<path/to/manual/data>') ``` ## Actual results ``` TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-11.5-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2749/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2749/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 1:10:03
https://api.github.com/repos/huggingface/datasets/issues/2746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2746/comments
https://api.github.com/repos/huggingface/datasets/issues/2746/events
https://github.com/huggingface/datasets/issues/2746
958,551,619
MDU6SXNzdWU5NTg1NTE2MTk=
2,746
Cannot load `few-nerd` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4", "events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}", "followers_url": "https://api.github.com/users/Mehrad0711/followers", "following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}", "gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehrad0711", "id": 28717374, "login": "Mehrad0711", "node_id": "MDQ6VXNlcjI4NzE3Mzc0", "organizations_url": "https://api.github.com/users/Mehrad0711/orgs", "received_events_url": "https://api.github.com/users/Mehrad0711/received_events", "repos_url": "https://api.github.com/users/Mehrad0711/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehrad0711", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @Mehrad0711,\r\n\r\nI'm afraid there is no \"canonical\" Hugging Face dataset named \"few-nerd\".\r\n\r\nThere are 2 kinds of datasets hosted at the Hugging Face Hub:\r\n- canonical datasets (their identifier contains no slash \"/\"): we, the Hugging Face team, supervise their implementation and we make sure they work correctly by means of our test suite\r\n- community datasets (their identifier contains a slash \"/\", where before the slash it is the username or the organization name): those datasets are uploaded to the Hub by the community, and we, the Hugging Face team, do not supervise them; it is the responsibility of the user/organization implementing them properly if they want them to be used by other users.\r\n\r\nIn this specific case, there is no \"canonical\" dataset named \"few-nerd\". On the other hand, there are two \"community\" datasets named \"few-nerd\":\r\n- [\"nbroad/few-nerd\"](https://huggingface.co/datasets/nbroad/few-nerd)\r\n- [\"dfki-nlp/few-nerd\"](https://huggingface.co/datasets/dfki-nlp/few-nerd)\r\n\r\nIf they were properly implemented, you should be able to load them this way:\r\n```python\r\n# \"nbroad/few-nerd\" community dataset\r\nds = load_dataset(\"nbroad/few-nerd\", \"supervised\")\r\n\r\n# \"dfki-nlp/few-nerd\" community dataset\r\nds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\n```\r\n\r\nHowever, they are not correctly implemented and both of them give errors:\r\n- \"nbroad/few-nerd\":\r\n ```\r\n TypeError: expected str, bytes or os.PathLike object, not dict\r\n ```\r\n- \"dfki-nlp/few-nerd\":\r\n ```\r\n ConnectionError: Couldn't reach https://cloud.tsinghua.edu.cn/f/09265750ae6340429827/?dl=1\r\n ```\r\n\r\nYou could try to contact their users/organizations to inform them about their bugs and ask them if they are planning to fix them. Alternatively you could try to implement your own script for this dataset.", "Thanks @albertvillanova for your detailed explanation! I will resort to my own scripts for now. ", "Hello, @Mehrad0711; Hi, @albertvillanova !\r\nI am the maintainer of the `dfki/few-nerd\" dataset script, sorry for the very late reply and hope this message finds you well!\r\nWe should use\r\n```\r\ndataset = load_dataset(\"dfki-nlp/few-nerd\", name=\"supervised\")\r\n```\r\ninstead of not specifying the \"name\" argument, where name is from `[\"supervised\", \"inter\", \"intra\"]`. Otherwise the method just treats \"supervised\" as `split`, which we reserve after specifying the name, since for each name, there are three splits: train, dev and test.\r\n\r\nAlso we use Tsinghua server source to download data files since it is the official source referred in the paper where the dataset is released (even though it is cc-by-sa-4.0 licensed, means we can copy the data anywhere after mentioning the license\r\n). Sometimes the server just runs down due to high pressure, kinda weird (we encountered the same server problem serveral times a month when we conducted experiments on Few-NERD XD). 
I tried the script just now and it works perfectly!\r\n```\r\n>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 18824\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 37648\r\n })\r\n})\r\n>>> dataset[\"train\"]\r\nDataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n})\r\n>>> dataset[\"train\"][0]\r\n{'id': '0', 'tokens': ['Paul', 'International', 'airport', '.'], 'ner_tags': [0, 0, 0, 0], 'fine_ner_tags': [0, 0, 0, 0]}\r\n```\r\nAnyways if you cannot stand the pain with the server and its slow download speed, you can also download the `dfki/few-nerd.py` script from HF and change the `_URLs` to your personal drive (after you once successfully download the data and upload to your cloud drive), and then load the .py script locally.\r\n\r\nHope this reply can still be any help. If you still have problems with it, feel free to ask here and I am glad to help!\r\nBest wishes.", "Hi @chen-yuxuan, thanks for your answer.\r\n\r\nJust a few comments:\r\n\r\n- Please, note that as we use `datasets.load_dataset` implementation, we can pass the configuration name as the second positional argument (no need to pass explicitly `name=`) and it downloads the 3 splits:\r\n```python\r\n In [4]: ds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.5k/11.5k [00:00<00:00, 2.85MB/s]\r\nDownloading and preparing dataset few_nerd/supervised to .cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e40882b71f037a4a1f232025899170fbe8113cd2f4a26dddd2add7222a077255...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6M/14.6M [01:16<00:00, 190kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [01:14<00:00, 160kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.0M/12.0M [01:04<00:00, 186kB/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [03:58<00:00, 79.45s/it]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.11it/s]\r\n```\r\n\r\n- On the other hand, please note that your script does not work on Windows machines, because you call `open()` without passing the encoding parameter:\r\n```\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\dfki-nlp___few-nerd\\e40882b71f037a4a1f232025899170fbe8113cd2f4a26dddd2add7222a077255\\few-nerd.py in <genexpr>(.0)\r\n 276 assert filepath[-4:] == \".txt\"\r\n 277\r\n--> 278 num_lines = sum(1 for _ in open(filepath))\r\n 279 id = 0\r\n 280\r\n\r\n.venv\\lib\\encodings\\cp1252.py in decode(self, input, final)\r\n 21 class IncrementalDecoder(codecs.IncrementalDecoder):\r\n 22 def decode(self, input, 
final=False):\r\n---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\n 24\r\n 25 class StreamWriter(Codec,codecs.StreamWriter):\r\n\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 5238: character maps to <undefined>\r\n```\r\n\r\nIf you would like your script to be usable on Windows machines, you should pass `encoding=\"utf-8\"` to every `open()` function:\r\n- line 278: `num_lines = sum(1 for _ in open(filepath, encoding=\"utf-8\"))`\r\n- line 281: `with open(filepath, \"r\", encoding=\"utf-8\")`", "Thank you @albertvillanova for your detailed feedback!\r\n\r\n> no need to pass explicitly `name=`\r\n\r\nGood catch! I thought `split` stands before `name` in the argument list... but now it is all clear to me, sounds cool! Thanks for the explanation.\r\n\r\nAnyways in our old code it still looks bit confusing if we only want one split but the function downloads all, so to allow efficient downloading, I optimized the code a bit so that only the specified split data is downloaded. now we get\r\n```\r\n>>> x = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading and preparing dataset few_nerd/supervised to /home/user/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/8e7ab598946cd5b395dcec6ea239123c8dff5b58b8e1c03b0c595b540248a885...\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████| 14.6M/14.6M [01:01<00:00, 238kB/s]\r\n100%|██████████████████████████████████████████████████████████████████████| 3359329/3359329 [00:12<00:00, 275462.84it/s]\r\n100%|████████████████████████████████████████████████████████████████████████| 482037/482037 [00:01<00:00, 278633.64it/s]\r\n100%|████████████████████████████████████████████████████████████████████████| 958765/958765 [00:03<00:00, 267472.83it/s]\r\nDataset few_nerd downloaded and prepared to /home/user/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/8e7ab598946cd5b395dcec6ea239123c8dff5b58b8e1c03b0c595b540248a885. Subsequent calls will reuse this data.\r\n```\r\nwhere only one progress bar indicates downloading, and the three others just indicate pre-processing for the train, dev, test set.\r\n\r\nFor the encoding issue, I have made corresponding changes for the two lines you pointed out. However, I have no windows machine at hand, I would really appreciate it if you could help test on your end.\r\n\r\nAll the updates are uploaded to HF under `dfki-nlp` account where I am working for. 
\r\nThank you again for your kind help!\r\n", "Hi @chen-yuxuan,\r\n\r\nI have tested on Windows and now it works perfectly, after the fixing of the encoding issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.5k/11.5k [00:00<?, ?B/s]\r\nDownloading and preparing dataset few_nerd/supervised to C:\\Users\\username\\.cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e1ceeaee82073fea12206e4461c7cfcd67e68c8f3ebeca179bddcacee00c4511...\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3359329/3359329 [00:25<00:00, 129427.23it/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 482037/482037 [00:03<00:00, 134513.66it/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 958765/958765 [00:06<00:00, 143152.35it/s]\r\nDataset few_nerd downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e1ceeaee82073fea12206e4461c7cfcd67e68c8f3ebeca179bddcacee00c4511. Subsequent calls will reuse this data.765 [00:06<00:00, 139045.03it/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 174.71it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 18824\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 37648\r\n })\r\n})\r\n```" ]
2021-08-02T22:18:57
2021-11-16T08:51:34
2021-08-03T19:45:43
NONE
null
null
null
null
## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at /Users/Mehrad/Documents/GitHub/genienlp/few-nerd/few-nerd.py, or remotely (FileNotFoundError). Downloading and preparing dataset few_nerd/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/Mehrad/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53... Traceback (most recent call last): File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split disable=bool(logging.get_verbosity() == logging.NOTSET), File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53/few-nerd.py", line 196, in _generate_examples with open(filepath, encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: '/Users/Mehrad/.cache/huggingface/datasets/downloads/supervised/train.json' ``` The bug is probably in identifying and downloading the dataset. If I download the json splits directly from [link](https://github.com/nbroad1881/few-nerd/tree/main/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Python version: 3.8 - PyArrow version: 1.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4", "events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}", "followers_url": "https://api.github.com/users/Mehrad0711/followers", "following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}", "gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehrad0711", "id": 28717374, "login": "Mehrad0711", "node_id": "MDQ6VXNlcjI4NzE3Mzc0", "organizations_url": "https://api.github.com/users/Mehrad0711/orgs", "received_events_url": "https://api.github.com/users/Mehrad0711/received_events", "repos_url": "https://api.github.com/users/Mehrad0711/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehrad0711", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2746/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2746/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
21:26:46
https://api.github.com/repos/huggingface/datasets/issues/2743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2743/comments
https://api.github.com/repos/huggingface/datasets/issues/2743/events
https://github.com/huggingface/datasets/issues/2743
958,119,251
MDU6SXNzdWU5NTgxMTkyNTE=
2,743
Dataset JSON is incorrect
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "As discussed, the metadata JSON files must be regenerated because the keys were nor properly generated and they will not be read by the builder:\r\n> Indeed there is some problem/bug while reading the datasets_info.json file: there is a mismatch with the config.name keys in the file...\r\nIn the meanwhile, in order to be able to use the datasets_info.json file content, you can create the builder without passing the name :\r\n```\r\nIn [25]: builder = datasets.load_dataset_builder(\"journalists_questions\")\r\nIn [26]: builder.info.splits\r\nOut[26]: {'train': SplitInfo(name='train', num_bytes=342296, num_examples=10077, dataset_name='journalists_questions')}\r\n```\r\n\r\nAfter regenerating the metadata JSON file for this dataset, I get the right key:\r\n```\r\n{\"plain_text\": {\"description\": \"The journalists_questions corpus (\r\n```", "Thanks!" ]
2021-08-02T13:01:26
2021-08-03T10:06:57
2021-08-03T09:25:33
COLLABORATOR
null
null
null
null
## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset_infos.json. The only config should be `plain_text`, but the first key in the JSON is `journalists_questions` (the dataset id) instead. ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Steps to reproduce the bug Look at the files. ## Expected results The first key should be `plain_text`: ```json { "plain_text": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Actual results ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2743/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2743/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
20:24:07
https://api.github.com/repos/huggingface/datasets/issues/2742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2742/comments
https://api.github.com/repos/huggingface/datasets/issues/2742/events
https://github.com/huggingface/datasets/issues/2742
958,114,064
MDU6SXNzdWU5NTgxMTQwNjQ=
2,742
Improve detection of streamable file types
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "maybe we should rather attempt to download a `Range` from the server and see if it works?" ]
2021-08-02T12:55:09
2021-11-12T17:18:10
2021-11-12T17:18:10
COLLABORATOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_text") builder._split_generators(StreamingDownloadManager(base_path=builder.base_path)) ``` raises ``` NotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet ``` But the file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is a text file and it can be streamed: ```bash curl --header "Range: bytes=0-100" -L https://drive.google.com/uc\?export\=download\&id\=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U 506938088174940160 yes 1 302221719412830209 yes 1 289761704907268096 yes 1 513820885032378369 yes % ``` Yet, it's wrongly categorized as a file type that cannot be streamed because the test is currently based on 1. the presence of a file extension at the end of the URL (here: no extension), and 2. the inclusion of this extension in a list of supported formats. **Describe the solution you'd like** In the case of an URL (instead of a local path), ask for the MIME type, and decide on that value? Note that it would not work in that case, because the value of `content_type` is `text/html; charset=UTF-8`. **Describe alternatives you've considered** Add a variable in the dataset script to set the data format by hand.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2742/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2742/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
102 days, 4:23:01
https://api.github.com/repos/huggingface/datasets/issues/2741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2741/comments
https://api.github.com/repos/huggingface/datasets/issues/2741/events
https://github.com/huggingface/datasets/issues/2741
957,979,559
MDU6SXNzdWU5NTc5Nzk1NTk=
2,741
Add Hypersim dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
[]
2021-08-02T10:06:50
2021-12-08T12:06:51
null
CONTRIBUTOR
null
null
null
null
## Adding a Dataset - **Name:** Hypersim - **Description:** photorealistic synthetic dataset for holistic indoor scene understanding - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/apple/ml-hypersim Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2741/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2741/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2737/comments
https://api.github.com/repos/huggingface/datasets/issues/2737/events
https://github.com/huggingface/datasets/issues/2737
957,124,881
MDU6SXNzdWU5NTcxMjQ4ODE=
2,737
SacreBLEU update
{ "avatar_url": "https://avatars.githubusercontent.com/u/46989091?v=4", "events_url": "https://api.github.com/users/devrimcavusoglu/events{/privacy}", "followers_url": "https://api.github.com/users/devrimcavusoglu/followers", "following_url": "https://api.github.com/users/devrimcavusoglu/following{/other_user}", "gists_url": "https://api.github.com/users/devrimcavusoglu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/devrimcavusoglu", "id": 46989091, "login": "devrimcavusoglu", "node_id": "MDQ6VXNlcjQ2OTg5MDkx", "organizations_url": "https://api.github.com/users/devrimcavusoglu/orgs", "received_events_url": "https://api.github.com/users/devrimcavusoglu/received_events", "repos_url": "https://api.github.com/users/devrimcavusoglu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/devrimcavusoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devrimcavusoglu/subscriptions", "type": "User", "url": "https://api.github.com/users/devrimcavusoglu", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @devrimcavusoglu, \r\nI tried your code with latest version of `datasets`and `sacrebleu==1.5.1` and it's running fine after changing one small thing:\r\n```\r\nsacrebleu = datasets.load_metric('sacrebleu')\r\npredictions = [\"It is a guide to action which ensures that the military always obeys the commands of the party\"]\r\nreferences = [[\"It is a guide to action that ensures that the military will forever heed Party commands\"]] # double brackets here should do the work\r\nresults = sacrebleu.compute(predictions=predictions, references=references)\r\nprint(results)\r\noutput: {'score': 41.180376356915765, 'counts': [11, 8, 6, 4], 'totals': [18, 17, 16, 15], 'precisions': [61.111111111111114, 47.05882352941177, 37.5, 26.666666666666668], 'bp': 1.0, 'sys_len': 18, 'ref_len': 16}\r\n```", "@bhavitvyamalik hmm. I forgot double brackets, but still didn't work when used it with double brackets. It may be an isseu with platform (using win-10 currently), or versions. What is your platform and your version info for datasets, python, and sacrebleu ?", "You can check that here, I've reproduced your code in [Google colab](https://colab.research.google.com/drive/1X90fHRgMLKczOVgVk7NDEw_ciZFDjaCM?usp=sharing). Looks like there was some issue in `sacrebleu` which was fixed later from what I've found [here](https://github.com/pytorch/fairseq/issues/2049#issuecomment-622367967). Upgrading `sacrebleu` to latest version should work.", "It seems that next release of `sacrebleu` (v2.0.0) will break our `datasets` implementation to compute it. See my Google Colab: https://colab.research.google.com/drive/1SKmvvjQi6k_3OHsX5NPkZdiaJIfXyv9X?usp=sharing\r\n\r\nI'm reopening this Issue and making a Pull Request to fix it.", "> It seems that next release of `sacrebleu` (v2.0.0) will break our `datasets` implementation to compute it. See my Google Colab: https://colab.research.google.com/drive/1SKmvvjQi6k_3OHsX5NPkZdiaJIfXyv9X?usp=sharing\r\n> \r\n> I'm reopening this Issue and making a Pull Request to fix it.\r\n\r\nHow did you solve him" ]
2021-07-30T23:53:08
2021-09-22T10:47:41
2021-08-03T04:23:37
NONE
null
null
null
null
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises an error: AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'. This happens because the new version of sacrebleu no longer provides `DEFAULT_TOKENIZER`, but sacrebleu.py still tries to import it. For now this can be worked around by pinning `sacrebleu==1.5.0`. ## Steps to reproduce the bug ```python sacrebleu = datasets.load_metric('sacrebleu') predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"] references = ["It is a guide to action that ensures that the military will forever heed Party commands"] results = sacrebleu.compute(predictions=predictions, references=references) print(results) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: Python 3.8.0 - PyArrow version: 5.0.0
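Building on the comment thread above, a hedged usage sketch that avoids both pitfalls: it assumes a pinned pre-2.0 `sacrebleu` and passes the references as a list of reference lists.

```python
import datasets

# Assumes a pre-2.0 sacrebleu is installed, e.g. `pip install "sacrebleu==1.5.1"`,
# since sacrebleu 2.0.0 removed DEFAULT_TOKENIZER and breaks the metric script.
sacrebleu = datasets.load_metric("sacrebleu")

predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"]
# References are a list of reference lists: one inner list per prediction.
references = [["It is a guide to action that ensures that the military will forever heed Party commands"]]

results = sacrebleu.compute(predictions=predictions, references=references)
print(results["score"])  # ~41.18 with the inputs above, per the comment thread
```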
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2737/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2737/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3 days, 4:30:29
https://api.github.com/repos/huggingface/datasets/issues/2736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2736/comments
https://api.github.com/repos/huggingface/datasets/issues/2736/events
https://github.com/huggingface/datasets/issues/2736
956,895,199
MDU6SXNzdWU5NTY4OTUxOTk=
2,736
Add Microsoft Building Footprints dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
[ "Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!" ]
2021-07-30T16:17:08
2021-12-08T12:09:03
null
MEMBER
null
null
null
null
## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.microsoft.com/en-us/maps/building-footprints - **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @sashavor
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2736/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2735/comments
https://api.github.com/repos/huggingface/datasets/issues/2735/events
https://github.com/huggingface/datasets/issues/2735
956,889,365
MDU6SXNzdWU5NTY4ODkzNjU=
2,735
Add Open Buildings dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
[]
2021-07-30T16:08:39
2021-07-31T05:01:25
null
MEMBER
null
null
null
null
## Adding a Dataset - **Name:** Open Buildings - **Description:** A dataset of building footprints to support social good applications. Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. The project being based in Ghana, the current focus is on the continent of Africa. See: "Mapping Africa's Buildings with Satellite Imagery" https://ai.googleblog.com/2021/07/mapping-africas-buildings-with.html - **Paper:** https://arxiv.org/abs/2107.12283 - **Data:** https://sites.research.google/open-buildings/ - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @osanseviero
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2735/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2735/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2730/comments
https://api.github.com/repos/huggingface/datasets/issues/2730/events
https://github.com/huggingface/datasets/issues/2730
955,987,834
MDU6SXNzdWU5NTU5ODc4MzQ=
2,730
Update CommonVoice with new release
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
[ "cc @patrickvonplaten?", "Does anybody know if there is a bundled link, which would allow direct data download instead of manual? \r\nSomething similar to: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ab.tar.gz` ? cc @patil-suraj \r\n", "Also see: https://github.com/common-voice/common-voice-bundler/issues/15" ]
2021-07-29T15:59:59
2021-08-07T16:19:19
null
MEMBER
null
null
null
null
## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8x, from 24 to 220). - **Paper:** https://discourse.mozilla.org/t/common-voice-2021-mid-year-dataset-release/83812 - **Data:** https://commonvoice.mozilla.org/en/datasets - **Motivation:** More data and more varied. I think we just need to add configs in the existing dataset script. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2730/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2730/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2728/comments
https://api.github.com/repos/huggingface/datasets/issues/2728/events
https://github.com/huggingface/datasets/issues/2728
955,892,970
MDU6SXNzdWU5NTU4OTI5NzA=
2,728
Concurrent use of same dataset (already downloaded)
{ "avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4", "events_url": "https://api.github.com/users/PierreColombo/events{/privacy}", "followers_url": "https://api.github.com/users/PierreColombo/followers", "following_url": "https://api.github.com/users/PierreColombo/following{/other_user}", "gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PierreColombo", "id": 22492839, "login": "PierreColombo", "node_id": "MDQ6VXNlcjIyNDkyODM5", "organizations_url": "https://api.github.com/users/PierreColombo/orgs", "received_events_url": "https://api.github.com/users/PierreColombo/received_events", "repos_url": "https://api.github.com/users/PierreColombo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions", "type": "User", "url": "https://api.github.com/users/PierreColombo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "Launching simultaneous job relying on the same datasets try some writing issue. I guess it is unexpected since I only need to load some already downloaded file.", "If i have two jobs that use the same dataset. I got :\r\n\r\n\r\n File \"compute_measures.py\", line 181, in <module>\r\n train_loader, val_loader, test_loader = get_dataloader(args)\r\n File \"/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py\", line 69, in get_dataloader\r\n dataset_train = load_dataset('paws', \"labeled_final\", split='train', download_mode=\"reuse_cache_if_exists\")\r\n File \"/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py\", line 582, in download_and_prepare\r\n self._save_info()\r\n File \"/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _save_info\r\n self.info.write_to_directory(self._cache_dir)\r\n File \"/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/info.py\", line 195, in write_to_directory\r\n with open(os.path.join(dataset_info_dir, config.LICENSE_FILENAME), \"wb\") as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: '/gpfswork/rech/toto/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/LICENSE'", "You can probably have a solution much faster than me (first time I use the library). But I suspect some write function are used when loading the dataset from cache.", "I have the same issue:\r\n```\r\nTraceback (most recent call last):\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py\", line 1040, in _prepare_split\r\n with ArrowWriter(features=self.info.features, path=fpath) as writer:\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/arrow_writer.py\", line 192, in __init__\r\n self.stream = pa.OSFile(self._path, \"wb\")\r\n File \"pyarrow/io.pxi\", line 829, in pyarrow.lib.OSFile.__cinit__\r\n File \"pyarrow/io.pxi\", line 844, in pyarrow.lib.OSFile._open_writable\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '/dccstor/tslm-gen/.cache/csv/default-387f1f95c084d4df/0.0.0/2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0.incomplete/csv-validation.arrow'. 
Detail: [errno 2] No such file or directory\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"/dccstor/tslm/elron/tslm-gen/train.py\", line 510, in <module>\r\n main()\r\n File \"/dccstor/tslm/elron/tslm-gen/train.py\", line 246, in main\r\n datasets = prepare_dataset(dataset_args, logger)\r\n File \"/dccstor/tslm/elron/tslm-gen/data.py\", line 157, in prepare_dataset\r\n datasets = load_dataset(extension, data_files=data_files, split=dataset_split, cache_dir=dataset_args.dataset_cache_dir, na_filter=False, download_mode=dataset_args.dataset_generate_mode)\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/load.py\", line 742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py\", line 654, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 2] Failed to open local file '/dccstor/tslm-gen/.cache/csv/default-387f1f95c084d4df/0.0.0/2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0.incomplete/csv-validation.arrow'. Detail: [errno 2] No such file or directory\r\n```" ]
2021-07-29T14:18:38
2021-08-02T07:25:57
null
CONTRIBUTOR
null
null
null
null
## Describe the bug When launching several jobs at the same time that load the same dataset, some errors are triggered (see the last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" "bert-large-cased" "roberta-large" "albert-base-v1" "albert-large-v1"; do for TASK_NAME in "mrpc" "rte" 'imdb' "paws" "mnli"; do export OUTPUT_DIR=${MODEL}_${TASK_NAME} sbatch --job-name=${OUTPUT_DIR} \ --gres=gpu:1 \ --no-requeue \ --cpus-per-task=10 \ --hint=nomultithread \ --time=1:00:00 \ --output=jobinfo/${OUTPUT_DIR}_%j.out \ --error=jobinfo/${OUTPUT_DIR}_%j.err \ --qos=qos_gpu-t4 \ --wrap="module purge; module load pytorch-gpu/py3/1.7.0 ; export HF_DATASETS_OFFLINE=1; export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets; python compute_measures.py --seed=$SEED --saving_path=results --batch_size=$BATCH_SIZE --task_name=$TASK_NAME --model_name=/gpfswork/rech/toto/transformers_models/$MODEL" done done ```python # Sample code to reproduce the bug dataset_train = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_train = dataset_train.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter))) dataset_val = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_val = dataset_val.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter, args.filter + 5000))) dataset_test = load_dataset('imdb', split='test', download_mode="reuse_cache_if_exists") dataset_test = dataset_test.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True) ``` ## Expected results I believe I am doing something wrong with the objects. ## Actual results Traceback (most recent call last): File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 983, in _prepare_split check_duplicates=True, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/arrow_writer.py", line 192, in __init__ self.stream = pa.OSFile(self._path, "wb") File "pyarrow/io.pxi", line 829, in pyarrow.lib.OSFile.__cinit__ File "pyarrow/io.pxi", line 844, in pyarrow.lib.OSFile._open_writable File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status FileNotFoundError: [Errno 2] Failed to open local file '/gpfswork/rech/tts/unm25jp/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File "compute_measures.py", line 181, in <module> train_loader, val_loader, test_loader = get_dataloader(args) File "/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py", line 69, in get_dataloader dataset_train = load_dataset('paws', "labeled_final", split='train', download_mode="reuse_cache_if_exists") File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 658, in _download_and_prepare + str(e) OSError: Cannot find data file. Original error: [Errno 2] Failed to open local file '/gpfswork/rech/toto/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. Detail: [errno 2] No such file or directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.8.0 - Platform: linux (jeanzay) - Python version: pyarrow==2.0.0 - PyArrow version: 3.7.8
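One possible mitigation sketch for the race described above (not an official fix): serialize the prepare step behind a file lock so that only one process writes the Arrow files, or alternatively give each job its own `HF_DATASETS_CACHE`. The use of the third-party `filelock` package here is an assumption for illustration.

```python
import os
from filelock import FileLock
from datasets import load_dataset

cache_dir = os.environ.get(
    "HF_DATASETS_CACHE", os.path.expanduser("~/.cache/huggingface/datasets")
)
os.makedirs(cache_dir, exist_ok=True)

# Only one job at a time prepares the dataset; the others block on the lock and
# then reuse the already-written cache instead of racing on the *.incomplete files.
with FileLock(os.path.join(cache_dir, "paws_labeled_final.lock")):
    dataset_train = load_dataset("paws", "labeled_final", split="train")
```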
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2728/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2728/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2727/comments
https://api.github.com/repos/huggingface/datasets/issues/2727/events
https://github.com/huggingface/datasets/issues/2727
955,812,149
MDU6SXNzdWU5NTU4MTIxNDk=
2,727
Error in loading the Arabic Billion Words Corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this:\r\nFor the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. All these 36 records look something like:\r\n```\r\n<Techreen>\r\n <ID>TRN_ARB_0248167</ID>\r\n <URL>http://tishreen.news.sy/tishreen/public/read/248240</URL>\r\n <Headline>Removed, because the original articles was in English</Headline>\r\n</Techreen>\r\n```\r\n\r\nand all the 288 faulty records in the `Almustaqbal` config look like:\r\n```\r\n<Almustaqbal>\r\n <ID>MTL_ARB_0028398</ID>\r\n \r\n <URL>http://www.almustaqbal.com/v4/article.aspx?type=NP&ArticleID=179015</URL>\r\n <Headline> Removed because it is not available in the original site</Headline>\r\n</Almustaqbal>\r\n```\r\n\r\nso the error is happening because the articles were removed and so the associated records lack the `Text` tag.\r\n\r\nIn this case, I think we just need to catch the `IndexError` and ignore (pass) it.\r\n", "Thanks @M-Salti for reporting this issue and for your investigation.\r\n\r\nIndeed, those `IndexError` should be catched and the corresponding record should be ignored.\r\n\r\nI'm opening a Pull Request to fix it." ]
2021-07-29T12:53:09
2021-07-30T13:03:55
2021-07-30T13:03:55
CONTRIBUTOR
null
null
null
null
## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results The datasets load successfully. ## Actual results ```python _extract_tags(self, sample, tag) 139 if len(out) > 0: 140 break --> 141 return out[0] 142 143 def _clean_text(self, text): IndexError: list index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.2 - Platform: Ubuntu 18.04.5 LTS - Python version: 3.7.11 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2727/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2727/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 0:10:46
https://api.github.com/repos/huggingface/datasets/issues/2724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2724/comments
https://api.github.com/repos/huggingface/datasets/issues/2724/events
https://github.com/huggingface/datasets/issues/2724
954,919,607
MDU6SXNzdWU5NTQ5MTk2MDc=
2,724
404 Error when loading remote data files from private repo
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "I guess the issue is when computing the ETags of the remote files. Indeed `use_auth_token` must be passed to `request_etags` here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5/src/datasets/builder.py#L160-L160", "Yes, I remember having properly implemented that: \r\n- https://github.com/huggingface/datasets/commit/7a9c62f7cef9ecc293f629f859d4375a6bd26dc8#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R160\r\n- https://github.com/huggingface/datasets/pull/2628/commits/6350a03b4b830339a745f7b1da46ece784ca734c\r\n\r\nBut a subsequent refactoring accidentally removed it...", "I have opened a PR to fix it @lewtun." ]
2021-07-28T14:24:23
2021-07-29T04:58:49
2021-07-28T16:38:01
MEMBER
null
null
null
null
## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/datasets/lewtun/asr-preds-test/resolve/main/preds.jsonl ``` ## Expected results Load dataset. ## Actual results 404 Error.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2724/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2724/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2:13:38
https://api.github.com/repos/huggingface/datasets/issues/2722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2722/comments
https://api.github.com/repos/huggingface/datasets/issues/2722/events
https://github.com/huggingface/datasets/issues/2722
954,446,053
MDU6SXNzdWU5NTQ0NDYwNTM=
2,722
Missing cache file
{ "avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4", "events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}", "followers_url": "https://api.github.com/users/PosoSAgapo/followers", "following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}", "gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PosoSAgapo", "id": 33200481, "login": "PosoSAgapo", "node_id": "MDQ6VXNlcjMzMjAwNDgx", "organizations_url": "https://api.github.com/users/PosoSAgapo/orgs", "received_events_url": "https://api.github.com/users/PosoSAgapo/received_events", "repos_url": "https://api.github.com/users/PosoSAgapo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions", "type": "User", "url": "https://api.github.com/users/PosoSAgapo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "This could be solved by going to the glue/ directory and delete sst2 directory, then load the dataset again will help you redownload the dataset.", "Hi ! Not sure why this file was missing, but yes the way to fix this is to delete the sst2 directory and to reload the dataset" ]
2021-07-28T03:52:07
2022-03-21T08:27:51
2022-03-21T08:27:51
NONE
null
null
null
null
The cache file is strangely missing after I restart my program. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json'`
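A minimal recovery sketch following the advice in the comments above: delete the stale `sst2` cache directory and let `load_dataset` rebuild it (the path is the one from the error message and will differ on other machines).

```python
import shutil
import datasets

# Path taken from the error message above; adjust to your own cache location.
stale_dir = "/Users/chris/.cache/huggingface/datasets/glue/sst2"
shutil.rmtree(stale_dir, ignore_errors=True)

# Reloading re-downloads and re-prepares the missing files.
glue_dataset = datasets.load_dataset("glue", "sst2")
```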
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2722/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2722/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
236 days, 4:35:44
https://api.github.com/repos/huggingface/datasets/issues/2719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2719/comments
https://api.github.com/repos/huggingface/datasets/issues/2719/events
https://github.com/huggingface/datasets/issues/2719
953,932,416
MDU6SXNzdWU5NTM5MzI0MTY=
2,719
Use ETag in streaming mode to detect resource updates
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
open
false
null
[]
[]
2021-07-27T14:17:09
2021-10-22T09:36:08
null
COLLABORATOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I've currently no way to know if the remote data has been updated or not, thus I don't know when to invalidate my cache. **Describe the solution you'd like** Take the ETag of the data files into account and provide it (directly or through a hash) to give a signal that I can invalidate my cache. **Describe alternatives you've considered** None
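A rough sketch of the requested behaviour from the caller's side, with hypothetical helper and path names: use the resource's `ETag` (or a hash derived from it) as part of the cache key for derived data.

```python
import hashlib
import requests

def remote_fingerprint(url: str) -> str:
    # Hypothetical helper: build a cache key from the resource's ETag
    # (falling back to Last-Modified when no ETag is served).
    headers = requests.head(url, allow_redirects=True).headers
    tag = headers.get("ETag") or headers.get("Last-Modified") or ""
    return hashlib.sha256(f"{url}::{tag}".encode()).hexdigest()[:16]

key = remote_fingerprint("https://example.com/data/train.jsonl")
processed_cache_path = f"my_cache/{key}.arrow"
# When the remote file changes, its ETag changes, the key changes, and the old
# cache entry is left behind -- exactly the invalidation signal asked for here.
```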
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2719/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2719/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2716/comments
https://api.github.com/repos/huggingface/datasets/issues/2716/events
https://github.com/huggingface/datasets/issues/2716
952,902,778
MDU6SXNzdWU5NTI5MDI3Nzg=
2,716
Calling shuffle on IterableDataset will disable batching in case any functions were mapped
{ "avatar_url": "https://avatars.githubusercontent.com/u/7098967?v=4", "events_url": "https://api.github.com/users/amankhandelia/events{/privacy}", "followers_url": "https://api.github.com/users/amankhandelia/followers", "following_url": "https://api.github.com/users/amankhandelia/following{/other_user}", "gists_url": "https://api.github.com/users/amankhandelia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amankhandelia", "id": 7098967, "login": "amankhandelia", "node_id": "MDQ6VXNlcjcwOTg5Njc=", "organizations_url": "https://api.github.com/users/amankhandelia/orgs", "received_events_url": "https://api.github.com/users/amankhandelia/received_events", "repos_url": "https://api.github.com/users/amankhandelia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amankhandelia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amankhandelia/subscriptions", "type": "User", "url": "https://api.github.com/users/amankhandelia", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;)", "Have raised the PR [here](https://github.com/huggingface/datasets/pull/2717)", "Fixed by #2717." ]
2021-07-26T13:24:59
2021-07-26T18:04:43
2021-07-26T18:04:43
CONTRIBUTOR
null
null
null
null
When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset together with a `map` call for which `batched=True`, then the batching operation will not happen; instead `batched` will be set to `False`. I did an RCA on the datasets codebase: the problem comes from [this line of code](https://github.com/huggingface/datasets/blob/d25a0bf94d9f9a9aa6cabdf5b450b9c327d19729/src/datasets/iterable_dataset.py#L197), which reads `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batch_size=self.batch_size`. As one can see, it is missing the `batched` argument, which means the iterator falls back to the default constructor value, which in this case is `False`. To remedy the problem we can change this line to `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batched=self.batched, batch_size=self.batch_size`
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2716/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4:39:44
https://api.github.com/repos/huggingface/datasets/issues/2714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2714/comments
https://api.github.com/repos/huggingface/datasets/issues/2714/events
https://github.com/huggingface/datasets/issues/2714
952,580,820
MDU6SXNzdWU5NTI1ODA4MjA=
2,714
add more precise information for size
{ "avatar_url": "https://avatars.githubusercontent.com/u/1493902?v=4", "events_url": "https://api.github.com/users/pennyl67/events{/privacy}", "followers_url": "https://api.github.com/users/pennyl67/followers", "following_url": "https://api.github.com/users/pennyl67/following{/other_user}", "gists_url": "https://api.github.com/users/pennyl67/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pennyl67", "id": 1493902, "login": "pennyl67", "node_id": "MDQ6VXNlcjE0OTM5MDI=", "organizations_url": "https://api.github.com/users/pennyl67/orgs", "received_events_url": "https://api.github.com/users/pennyl67/received_events", "repos_url": "https://api.github.com/users/pennyl67/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pennyl67/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pennyl67/subscriptions", "type": "User", "url": "https://api.github.com/users/pennyl67", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "We already have this information in the dataset_infos.json files of each dataset.\r\nMaybe we can parse these files in the backend to return their content with the endpoint at huggingface.co/api/datasets\r\n\r\nFor now if you want to access this info you have to load the json for each dataset. For example:\r\n- for a dataset on github like `squad` \r\n- https://raw.githubusercontent.com/huggingface/datasets/master/datasets/squad/dataset_infos.json\r\n- for a community dataset on the hub like `lhoestq/squad`:\r\n https://huggingface.co/datasets/lhoestq/squad/resolve/main/dataset_infos.json" ]
2021-07-26T07:11:03
2021-07-26T09:16:25
null
NONE
null
null
null
null
For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets.
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2714/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2714/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2709/comments
https://api.github.com/repos/huggingface/datasets/issues/2709/events
https://github.com/huggingface/datasets/issues/2709
951,534,757
MDU6SXNzdWU5NTE1MzQ3NTc=
2,709
Missing documentation for wnut_17 (ner_tags)
{ "avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4", "events_url": "https://api.github.com/users/maxpel/events{/privacy}", "followers_url": "https://api.github.com/users/maxpel/followers", "following_url": "https://api.github.com/users/maxpel/following{/other_user}", "gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maxpel", "id": 31095360, "login": "maxpel", "node_id": "MDQ6VXNlcjMxMDk1MzYw", "organizations_url": "https://api.github.com/users/maxpel/orgs", "received_events_url": "https://api.github.com/users/maxpel/received_events", "repos_url": "https://api.github.com/users/maxpel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxpel/subscriptions", "type": "User", "url": "https://api.github.com/users/maxpel", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @maxpel, thanks for reporting this issue.\r\n\r\nIndeed, the documentation in the dataset card is not complete. I’m opening a Pull Request to fix it.\r\n\r\nAs the paper explains, there are 6 entity types and we have ordered them alphabetically: `corporation`, `creative-work`, `group`, `location`, `person` and `product`. \r\n\r\nEach of these entity types has 2 possible IOB2 format tags: \r\n- `B-`: to indicate that the token is the beginning of an entity name, and the \r\n- `I-`: to indicate that the token is inside an entity name. \r\n\r\nAdditionally, there is the standalone IOB2 tag \r\n- `O`: that indicates that the token belongs to no named entity. \r\n\r\nIn total there are 13 possible tags, which correspond to the following integer numbers:\r\n\r\n0. `O`\r\n1. `B-corporation`\r\n2. `I-corporation`\r\n3. `B-creative-work`\r\n4. `I-creative-work`\r\n5. `B-group`\r\n6. `I-group`\r\n7. `B-location`\r\n8. `I-location`\r\n9. `B-person`\r\n10. `I-person`\r\n11. `B-product`\r\n12. `I-product`" ]
2021-07-23T12:25:32
2021-07-26T09:30:55
2021-07-26T09:30:55
CONTRIBUTOR
null
null
null
null
On the info page of the wnut_17 dataset (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases: `ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).` I trained a model with the data and it gives me 13 classes: ``` "id2label": { "0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "12": 12 } "label2id": { "0": 0, "1": 1, "10": 10, "11": 11, "12": 12, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9 } ``` The paper (https://www.aclweb.org/anthology/W17-4418.pdf) explains those 6 categories, but the ordering does not match: ``` 1. person 2. location (including GPE, facility) 3. corporation 4. product (tangible goods, or well-defined services) 5. creative-work (song, movie, book and so on) 6. group (subsuming music band, sports team, and non-corporate organisations) ``` It would be very helpful to me if somebody could clarify the model outputs and explain the "B-" and "I-" prefixes to me. Really great work on this and the other packages; I couldn't believe that training the model with that data was basically a one-liner!
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2709/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2709/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 21:05:23
https://api.github.com/repos/huggingface/datasets/issues/2708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2708/comments
https://api.github.com/repos/huggingface/datasets/issues/2708/events
https://github.com/huggingface/datasets/issues/2708
951,092,660
MDU6SXNzdWU5NTEwOTI2NjA=
2,708
QASC: incomplete training set
{ "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danyaljj", "id": 2441454, "login": "danyaljj", "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "repos_url": "https://api.github.com/users/danyaljj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "type": "User", "url": "https://api.github.com/users/danyaljj", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @danyaljj, thanks for reporting.\r\n\r\nUnfortunately, I have not been able to reproduce your problem. My train split has 8134 examples:\r\n```ipython\r\nIn [10]: ds[\"train\"]\r\nOut[10]:\r\nDataset({\r\n features: ['id', 'question', 'choices', 'answerKey', 'fact1', 'fact2', 'combinedfact', 'formatted_question'],\r\n num_rows: 8134\r\n})\r\n\r\nIn [11]: ds[\"train\"].shape\r\nOut[11]: (8134, 8)\r\n```\r\nand the content of the last 5 examples is:\r\n```ipython\r\nIn [12]: for i in range(8129, 8134):\r\n ...: print(json.dumps(ds[\"train\"][i]))\r\n ...:\r\n{\"id\": \"3KAKFY4PGU1LGXM77JAK2700NGCI3X\", \"question\": \"Chitin can be used for protection by whom?\", \"choices\": {\"text\": [\"Fungi\", \"People\", \"Man\", \"Fish\", \"trees\", \"Dogs\", \"animal\", \"Birds\"], \"label\": [\"A\", \"B\",\r\n \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"D\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Fish scales are also composed of chitin.\", \"combinedfact\": \"Chitin can be used for prote\r\nction by fish.\", \"formatted_question\": \"Chitin can be used for protection by whom? (A) Fungi (B) People (C) Man (D) Fish (E) trees (F) Dogs (G) animal (H) Birds\"}\r\n{\"id\": \"336YQZE83VDAQVZ26HW59X51JZ9M5M\", \"question\": \"Which type of animal uses plates for protection?\", \"choices\": {\"text\": [\"squids\", \"reptiles\", \"sea urchins\", \"fish\", \"amphibians\", \"Frogs\", \"mammals\", \"salm\r\non\"], \"label\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"B\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Reptiles have scales or plates.\", \"combinedfact\": \"Reptiles use\r\n their plates for protection.\", \"formatted_question\": \"Which type of animal uses plates for protection? (A) squids (B) reptiles (C) sea urchins (D) fish (E) amphibians (F) Frogs (G) mammals (H) salmon\"}\r\n{\"id\": \"3WZ36BJEV3FGS66VGOOUYX0LN8GTBU\", \"question\": \"What are used for protection by fish?\", \"choices\": {\"text\": [\"scales\", \"fins\", \"streams.\", \"coral\", \"gills\", \"Collagen\", \"mussels\", \"whiskers\"], \"label\": [\"\r\nA\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"A\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Fish are backboned aquatic animals.\", \"combinedfact\": \"scales are used for prote\r\nction by fish \", \"formatted_question\": \"What are used for protection by fish? (A) scales (B) fins (C) streams. (D) coral (E) gills (F) Collagen (G) mussels (H) whiskers\"}\r\n{\"id\": \"3Z2R0DQ0JHDKFAO2706OYIXGNA4E28\", \"question\": \"What are pangolins covered in?\", \"choices\": {\"text\": [\"tunicates\", \"Echinoids\", \"shells\", \"exoskeleton\", \"blastoids\", \"barrel-shaped\", \"protection\", \"white\"\r\n], \"label\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"G\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Pangolins have an elongate and tapering body covered above with ov\r\nerlapping scales.\", \"combinedfact\": \"Pangolins are covered in overlapping protection.\", \"formatted_question\": \"What are pangolins covered in? 
(A) tunicates (B) Echinoids (C) shells (D) exoskeleton (E) blastoids\r\n (F) barrel-shaped (G) protection (H) white\"}\r\n{\"id\": \"3PMBY0YE272GIWPNWIF8IH5RBHVC9S\", \"question\": \"What are covered with protection?\", \"choices\": {\"text\": [\"apples\", \"trees\", \"coral\", \"clams\", \"roses\", \"wings\", \"hats\", \"fish\"], \"label\": [\"A\", \"B\", \"C\", \"D\r\n\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"H\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Fish are covered with scales.\", \"combinedfact\": \"Fish are covered with protection\", \"formatted_q\r\nuestion\": \"What are covered with protection? (A) apples (B) trees (C) coral (D) clams (E) roses (F) wings (G) hats (H) fish\"}\r\n```\r\n\r\nCould you please load again your dataset and print its shape, like this:\r\n```python\r\nds = load_dataset(\"qasc\", split=\"train)\r\nprint(ds.shape)\r\n```\r\nand confirm which is your output?", "Hmm .... it must have been a mistake on my side. Sorry for the hassle! " ]
2021-07-22T21:59:44
2021-07-23T13:30:07
2021-07-23T13:30:07
CONTRIBUTOR
null
null
null
null
## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"split: {split} - size: {len(instances)}") for x in instances: print(json.dumps(x)) load_instances('test') load_instances('validation') load_instances('train') ``` ## results For test and validation, we can see the examples in the output (which is good!): ``` split: test - size: 920 {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Anthax", "under water", "uterus", "wombs", "two", "moles", "live", "embryo"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What type of birth do therian mammals have? (A) Anthax (B) under water (C) uterus (D) wombs (E) two (F) moles (G) live (H) embryo", "id": "3C44YUNSI1OBFBB8D36GODNOZN9DPA", "question": "What type of birth do therian mammals have?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Corvidae", "arthropods", "birds", "backbones", "keratin", "Jurassic", "front paws", "Parakeets."]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "By what time had mouse-sized viviparous mammals evolved? (A) Corvidae (B) arthropods (C) birds (D) backbones (E) keratin (F) Jurassic (G) front paws (H) Parakeets.", "id": "3B1NLC6UGZVERVLZFT7OUYQLD1SGPZ", "question": "By what time had mouse-sized viviparous mammals evolved?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Reduced friction", "causes infection", "vital to a good life", "prevents water loss", "camouflage from consumers", "Protection against predators", "spur the growth of the plant", "a smooth surface"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What does a plant's skin do? (A) Reduced friction (B) causes infection (C) vital to a good life (D) prevents water loss (E) camouflage from consumers (F) Protection against predators (G) spur the growth of the plant (H) a smooth surface", "id": "3QRYMNZ7FYGITFVSJET3PS0F4S0NT9", "question": "What does a plant's skin do?"} ... ``` However, only a few instances are loaded for the training split, which is not correct. ## Environment info - `datasets` version: '1.10.2' - Platform: MaxOS - Python version:3.7 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danyaljj", "id": 2441454, "login": "danyaljj", "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "repos_url": "https://api.github.com/users/danyaljj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "type": "User", "url": "https://api.github.com/users/danyaljj", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2708/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2708/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
15:30:23
https://api.github.com/repos/huggingface/datasets/issues/2707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2707/comments
https://api.github.com/repos/huggingface/datasets/issues/2707/events
https://github.com/huggingface/datasets/issues/2707
950,812,945
MDU6SXNzdWU5NTA4MTI5NDU=
2,707
404 Not Found Error when loading LAMA dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26467159?v=4", "events_url": "https://api.github.com/users/dwil2444/events{/privacy}", "followers_url": "https://api.github.com/users/dwil2444/followers", "following_url": "https://api.github.com/users/dwil2444/following{/other_user}", "gists_url": "https://api.github.com/users/dwil2444/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwil2444", "id": 26467159, "login": "dwil2444", "node_id": "MDQ6VXNlcjI2NDY3MTU5", "organizations_url": "https://api.github.com/users/dwil2444/orgs", "received_events_url": "https://api.github.com/users/dwil2444/received_events", "repos_url": "https://api.github.com/users/dwil2444/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwil2444/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwil2444/subscriptions", "type": "User", "url": "https://api.github.com/users/dwil2444", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Hi @dwil2444! I was able to reproduce your error when I downgraded to v1.1.2. Updating to the latest version of Datasets fixed the error for me :)", "Hi @dwil2444, thanks for reporting.\r\n\r\nCould you please confirm which `datasets` version you were using and if the problem persists after you update it to the latest version: `pip install -U datasets`?\r\n\r\nThanks @stevhliu for the hint to fix this! ;)", "@stevhliu @albertvillanova updating to the latest version of datasets did in fact fix this issue. Thanks a lot for your help!" ]
2021-07-22T15:52:33
2021-07-26T14:29:07
2021-07-26T14:29:07
NONE
null
null
null
null
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/lama/lama.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/lama/lama.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/26467159?v=4", "events_url": "https://api.github.com/users/dwil2444/events{/privacy}", "followers_url": "https://api.github.com/users/dwil2444/followers", "following_url": "https://api.github.com/users/dwil2444/following{/other_user}", "gists_url": "https://api.github.com/users/dwil2444/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwil2444", "id": 26467159, "login": "dwil2444", "node_id": "MDQ6VXNlcjI2NDY3MTU5", "organizations_url": "https://api.github.com/users/dwil2444/orgs", "received_events_url": "https://api.github.com/users/dwil2444/received_events", "repos_url": "https://api.github.com/users/dwil2444/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwil2444/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwil2444/subscriptions", "type": "User", "url": "https://api.github.com/users/dwil2444", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2707/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2707/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3 days, 22:36:34
https://api.github.com/repos/huggingface/datasets/issues/2705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2705/comments
https://api.github.com/repos/huggingface/datasets/issues/2705/events
https://github.com/huggingface/datasets/issues/2705
950,488,583
MDU6SXNzdWU5NTA0ODg1ODM=
2,705
404 not found error on loading WIKIANN dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/39296659?v=4", "events_url": "https://api.github.com/users/ronbutan/events{/privacy}", "followers_url": "https://api.github.com/users/ronbutan/followers", "following_url": "https://api.github.com/users/ronbutan/following{/other_user}", "gists_url": "https://api.github.com/users/ronbutan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ronbutan", "id": 39296659, "login": "ronbutan", "node_id": "MDQ6VXNlcjM5Mjk2NjU5", "organizations_url": "https://api.github.com/users/ronbutan/orgs", "received_events_url": "https://api.github.com/users/ronbutan/received_events", "repos_url": "https://api.github.com/users/ronbutan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ronbutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ronbutan/subscriptions", "type": "User", "url": "https://api.github.com/users/ronbutan", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @ronbutan, thanks for reporting.\r\n\r\nYou are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working.\r\n\r\nWe have opened an issue in the GitHub repository of the original dataset (afshinrahimi/mmner#4) and we have also contacted the author by email to ask if they are planning to fix this issue. See the details here: https://github.com/huggingface/datasets/issues/2691#issuecomment-885463027\r\n\r\nI close this issue because it is the same as in #2691. Feel free to subscribe to that other issue to be informed about any updates." ]
2021-07-22T09:55:50
2021-07-23T08:07:32
2021-07-23T08:07:32
NONE
null
null
null
null
## Describe the bug Unable to retrieve the wikiann English dataset ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann","en") ``` ## Expected results Colab notebook should display successful download status ## Actual results FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2705/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2705/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
22:11:42
https://api.github.com/repos/huggingface/datasets/issues/2703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2703/comments
https://api.github.com/repos/huggingface/datasets/issues/2703/events
https://github.com/huggingface/datasets/issues/2703
950,482,284
MDU6SXNzdWU5NTA0ODIyODQ=
2,703
Bad message when config name is missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[]
2021-07-22T09:47:23
2021-07-22T10:02:40
2021-07-22T10:02:40
MEMBER
null
null
null
null
When loading a dataset that have several configurations, we expect to see an error message if the user doesn't specify a config name. However in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message: ```python import datasets datasets.load_dataset("glue") ``` raises ```python AttributeError: 'BuilderConfig' object has no attribute 'text_features' ``` instead of ```python ValueError: Config name is missing. Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax'] Example of usage: `load_dataset('glue', 'cola')` ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2703/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2703/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:15:17
https://api.github.com/repos/huggingface/datasets/issues/2700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2700/comments
https://api.github.com/repos/huggingface/datasets/issues/2700/events
https://github.com/huggingface/datasets/issues/2700
950,276,325
MDU6SXNzdWU5NTAyNzYzMjU=
2,700
from datasets import Dataset is failing
{ "avatar_url": "https://avatars.githubusercontent.com/u/5582286?v=4", "events_url": "https://api.github.com/users/kswamy15/events{/privacy}", "followers_url": "https://api.github.com/users/kswamy15/followers", "following_url": "https://api.github.com/users/kswamy15/following{/other_user}", "gists_url": "https://api.github.com/users/kswamy15/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kswamy15", "id": 5582286, "login": "kswamy15", "node_id": "MDQ6VXNlcjU1ODIyODY=", "organizations_url": "https://api.github.com/users/kswamy15/orgs", "received_events_url": "https://api.github.com/users/kswamy15/received_events", "repos_url": "https://api.github.com/users/kswamy15/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kswamy15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kswamy15/subscriptions", "type": "User", "url": "https://api.github.com/users/kswamy15", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @kswamy15, thanks for reporting.\r\n\r\nWe are fixing this critical issue and making an urgent patch release of the `datasets` library today.\r\n\r\nIn the meantime, you can circumvent this issue by updating the `tqdm` library: `!pip install -U tqdm`" ]
2021-07-22T03:51:23
2021-07-22T07:23:45
2021-07-22T07:09:07
NONE
null
null
null
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import Dataset ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>() 25 import posixpath 26 import requests ---> 27 from tqdm.contrib.concurrent import thread_map 28 29 from .. import __version__, config, utils ModuleNotFoundError: No module named 'tqdm.contrib.concurrent' --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. --------------------------------------------------------------------------- ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: latest version as of 07/21/2021 - Platform: Google Colab - Python version: 3.7 - PyArrow version:
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2700/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3:17:44
https://api.github.com/repos/huggingface/datasets/issues/2699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2699/comments
https://api.github.com/repos/huggingface/datasets/issues/2699/events
https://github.com/huggingface/datasets/issues/2699
950,221,226
MDU6SXNzdWU5NTAyMjEyMjY=
2,699
cannot combine splits merging and streaming?
{ "avatar_url": "https://avatars.githubusercontent.com/u/4436747?v=4", "events_url": "https://api.github.com/users/eyaler/events{/privacy}", "followers_url": "https://api.github.com/users/eyaler/followers", "following_url": "https://api.github.com/users/eyaler/following{/other_user}", "gists_url": "https://api.github.com/users/eyaler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eyaler", "id": 4436747, "login": "eyaler", "node_id": "MDQ6VXNlcjQ0MzY3NDc=", "organizations_url": "https://api.github.com/users/eyaler/orgs", "received_events_url": "https://api.github.com/users/eyaler/received_events", "repos_url": "https://api.github.com/users/eyaler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eyaler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eyaler/subscriptions", "type": "User", "url": "https://api.github.com/users/eyaler", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "Hi ! That's missing indeed. We'll try to implement this for the next version :)\r\n\r\nI guess we just need to implement #2564 first, and then we should be able to add support for splits combinations", "is there an update on this? ran into the same issue on 2.17.1.\r\n\r\nOn a similar note, the keyword `split=\"all\"` also does not work as intended when `streaming=True`. ", "No update so far, especially since we haven't implemented an efficient way to query `split=train[50%:]` for example. The addition of two splits should be easy though, since we have `concatenate_datasets()`", "Can you concatenate_datasets that are being streamed now? I was led to believe concatenation works on non streaming datasets only.", "Yes `concatenate_datasets` works for datasets loaded in streaming mode as well" ]
2021-07-22T01:13:25
2024-04-08T13:26:46
null
NONE
null
null
null
null
This does not work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)` and fails with the error: `ValueError: Bad split: train+validation. Available splits: ['train', 'validation']` These work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation')` `dataset = datasets.load_dataset('mc4','iw',split='train',streaming=True)` `dataset = datasets.load_dataset('mc4','iw',split='validation',streaming=True)` I could not find a reference to this in the documentation, and the error message is confusing. It would also be nice to allow streaming for the merged splits.
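A sketch of the workaround suggested in the comments: load each split separately in streaming mode and concatenate them, instead of using the `train+validation` syntax. This assumes a `datasets` version recent enough for `concatenate_datasets` to accept iterable datasets (newer than the one used in the report):

```python
from datasets import load_dataset, concatenate_datasets

train = load_dataset("mc4", "iw", split="train", streaming=True)
validation = load_dataset("mc4", "iw", split="validation", streaming=True)

# Chains the two iterable datasets one after the other.
combined = concatenate_datasets([train, validation])
print(next(iter(combined)))
```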
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2699/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2695/comments
https://api.github.com/repos/huggingface/datasets/issues/2695/events
https://github.com/huggingface/datasets/issues/2695
949,864,823
MDU6SXNzdWU5NDk4NjQ4MjM=
2,695
Cannot import load_dataset on Colab
{ "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bayartsogt-ya", "id": 43239645, "login": "bayartsogt-ya", "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "type": "User", "url": "https://api.github.com/users/bayartsogt-ya", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "I'm facing the same issue on Colab today too.\r\n\r\n```\r\nModuleNotFoundError Traceback (most recent call last)\r\n<ipython-input-4-5833ac0f5437> in <module>()\r\n 3 \r\n 4 from ray import tune\r\n----> 5 from datasets import DatasetDict, Dataset\r\n 6 from datasets import load_dataset, load_metric\r\n 7 from dataclasses import dataclass\r\n\r\n7 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>()\r\n 25 import posixpath\r\n 26 import requests\r\n---> 27 from tqdm.contrib.concurrent import thread_map\r\n 28 \r\n 29 from .. import __version__, config, utils\r\n\r\nModuleNotFoundError: No module named 'tqdm.contrib.concurrent'\r\n\r\n---------------------------------------------------------------------------\r\nNOTE: If your import is failing due to a missing package, you can\r\nmanually install dependencies using either !pip or !apt.\r\n\r\nTo view examples of installing some common dependencies, click the\r\n\"Open Examples\" button below.\r\n---------------------------------------------------------------------------\r\n```", "@phosseini \r\nI think it is related to [1.10.0](https://github.com/huggingface/datasets/actions/runs/1052653701) release done 3 hours ago. (cc: @lhoestq )\r\nFor now I just downgraded to 1.9.0 and it is working fine.", "> @phosseini\r\n> I think it is related to [1.10.0](https://github.com/huggingface/datasets/actions/runs/1052653701) release done 3 hours ago. (cc: @lhoestq )\r\n> For now I just downgraded to 1.9.0 and it is working fine.\r\n\r\nSame here, downgraded to 1.9.0 for now and works fine.", "Hi, \r\n\r\nupdating tqdm to the newest version resolves the issue for me. You can do this as follows in Colab:\r\n```\r\n!pip install tqdm --upgrade\r\n```", "Hi @bayartsogt-ya and @phosseini, thanks for reporting.\r\n\r\nWe are fixing this critical issue and making an urgent patch release of the `datasets` library today.\r\n\r\nIn the meantime, as pointed out by @mariosasko, you can circumvent this issue by updating the `tqdm` library: \r\n```\r\n!pip install -U tqdm\r\n```" ]
2021-07-21T15:52:51
2021-07-22T07:26:25
2021-07-22T07:09:07
NONE
null
null
null
null
## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install datasets from datasets import load_dataset ``` ## Expected results Works without error ## Actual results Specify the actual results or traceback. ``` ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-8cc7de4c69eb> in <module>() ----> 1 from datasets import load_dataset, load_metric, Metric, MetricInfo, Features, Value 2 from sklearn.metrics import mean_squared_error /usr/local/lib/python3.7/dist-packages/datasets/__init__.py in <module>() 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in <module>() 40 from tqdm.auto import tqdm 41 ---> 42 from datasets.tasks.text_classification import TextClassification 43 44 from . import config, utils /usr/local/lib/python3.7/dist-packages/datasets/tasks/__init__.py in <module>() 1 from typing import Optional 2 ----> 3 from ..utils.logging import get_logger 4 from .automatic_speech_recognition import AutomaticSpeechRecognition 5 from .base import TaskTemplate /usr/local/lib/python3.7/dist-packages/datasets/utils/__init__.py in <module>() 19 20 from . import logging ---> 21 from .download_manager import DownloadManager, GenerateMode 22 from .file_utils import DownloadConfig, cached_path, hf_bucket_url, is_remote_url, temp_seed 23 from .mock_download_manager import MockDownloadManager /usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py in <module>() 24 25 from .. import config ---> 26 from .file_utils import ( 27 DownloadConfig, 28 cached_path, /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>() 25 import posixpath 26 import requests ---> 27 from tqdm.contrib.concurrent import thread_map 28 29 from .. import __version__, config, utils ModuleNotFoundError: No module named 'tqdm.contrib.concurrent' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.0 - Platform: Colab - Python version: 3.7.11 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2695/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2695/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
15:16:16
https://api.github.com/repos/huggingface/datasets/issues/2691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2691/comments
https://api.github.com/repos/huggingface/datasets/issues/2691/events
https://github.com/huggingface/datasets/issues/2691
949,758,379
MDU6SXNzdWU5NDk3NTgzNzk=
2,691
xtreme / pan-x cannot be downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @severo, thanks for reporting.\r\n\r\nHowever I have not been able to reproduce this issue. Could you please confirm if the problem persists for you?\r\n\r\nMaybe Dropbox (where the data source is hosted) was temporarily unavailable when you tried.", "Hmmm, the file (https://www.dropbox.com/s/dl/12h3qqog6q4bjve/panx_dataset.tar) really seems to be unavailable... I tried from various connexions and machines and got the same 404 error. Maybe the dataset has been loaded from the cache in your case?", "Yes @severo, weird... I could access the file when I answered to you, but now I cannot longer access it either... Maybe it was from the cache as you point out.\r\n\r\nAnyway, I have opened an issue in the GitHub repository responsible for the original dataset: https://github.com/afshinrahimi/mmner/issues/4\r\nI have also contacted the maintainer by email.\r\n\r\nI'll keep you informed with their answer.", "Reply from the author/maintainer: \r\n> Will fix the issue and let you know during the weekend.", "The author told that apparently Dropbox has changed their policy and no longer allow downloading the file without having signed in first. The author asked Hugging Face to host their dataset." ]
2021-07-21T14:18:05
2021-07-26T09:34:22
2021-07-26T09:34:22
COLLABORATOR
null
null
null
null
## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ``` ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2691/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2691/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4 days, 19:16:17
https://api.github.com/repos/huggingface/datasets/issues/2689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2689/comments
https://api.github.com/repos/huggingface/datasets/issues/2689/events
https://github.com/huggingface/datasets/issues/2689
949,447,104
MDU6SXNzdWU5NDk0NDcxMDQ=
2,689
cannot save the dataset to disk after rename_column
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! That's because you are trying to overwrite a file that is already open and being used.\r\nIndeed `foo/dataset.arrow` is open and used by your `dataset` object.\r\n\r\nWhen you do `rename_column`, the resulting dataset reads the data from the same arrow file.\r\nIn other cases like when using `map` on the other hand, the resulting dataset reads the data from another arrow file that is the result of the map transform.\r\n\r\nTherefore overwriting a dataset after `rename_column` is not possible, but it is possible after `map`, since `rename_column` doesn't switch to using another arrow file (the actual data stay the same).", "Ok, thanks for clearing it up :)", "so what would be the right way to read a dataset, then change something and then overwrite it with the new version?", "You have to write to a new directory, then delete the former directory, and finally rename the new directory.", "This worked for me; I know it's hacky but it works:\n\n```\nds.add_column...\nds = Dataset.from_dict(ds.to_dict())\nds.save_to_disk...\n```" ]
2021-07-21T08:13:40
2025-02-11T23:23:17
2021-07-21T13:11:04
CONTRIBUTOR
null
null
null
null
## Describe the bug If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug In [1]: from datasets import Dataset, load_from_disk In [5]: dataset=Dataset.from_dict({'foo': [0]}) In [7]: dataset.save_to_disk('foo') In [8]: dataset=load_from_disk('foo') In [10]: dataset=dataset.rename_column('foo', 'bar') In [11]: dataset.save_to_disk('foo') --------------------------------------------------------------------------- PermissionError Traceback (most recent call last) <ipython-input-11-a3bc0d4fc339> in <module> ----> 1 dataset.save_to_disk('foo') /mnt/beegfs/projects/meerqat/anaconda3/envs/meerqat/lib/python3.7/site-packages/datasets/arrow_dataset.py in save_to_disk(self, dataset_path , fs) 597 if Path(dataset_path, config.DATASET_ARROW_FILENAME) in cache_files_paths: 598 raise PermissionError( --> 599 f"Tried to overwrite {Path(dataset_path, config.DATASET_ARROW_FILENAME)} but a dataset can't overwrite itself." 600 ) 601 if Path(dataset_path, config.DATASET_INDICES_FILENAME) in cache_files_paths: PermissionError: Tried to overwrite foo/dataset.arrow but a dataset can't overwrite itself. ``` N. B. I created the dataset from dict to enable easy reproduction but the same happens if you load an existing dataset (e.g. starting from `In [8]`) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2689/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2689/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4:57:24
https://api.github.com/repos/huggingface/datasets/issues/2688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2688/comments
https://api.github.com/repos/huggingface/datasets/issues/2688/events
https://github.com/huggingface/datasets/issues/2688
949,182,074
MDU6SXNzdWU5NDkxODIwNzQ=
2,688
hebrew language codes he and iw should be treated as aliases
{ "avatar_url": "https://avatars.githubusercontent.com/u/4436747?v=4", "events_url": "https://api.github.com/users/eyaler/events{/privacy}", "followers_url": "https://api.github.com/users/eyaler/followers", "following_url": "https://api.github.com/users/eyaler/following{/other_user}", "gists_url": "https://api.github.com/users/eyaler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eyaler", "id": 4436747, "login": "eyaler", "node_id": "MDQ6VXNlcjQ0MzY3NDc=", "organizations_url": "https://api.github.com/users/eyaler/orgs", "received_events_url": "https://api.github.com/users/eyaler/received_events", "repos_url": "https://api.github.com/users/eyaler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eyaler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eyaler/subscriptions", "type": "User", "url": "https://api.github.com/users/eyaler", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @eyaler, thanks for reporting.\r\n\r\nWhile you are true with respect the Hebrew language tag (\"iw\" is deprecated and \"he\" is the preferred value), in the \"mc4\" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https://www.tensorflow.org/datasets/catalog/c4).", "For discoverability on the website I updated the YAML tags at the top of the mC4 dataset card https://github.com/huggingface/datasets/commit/38288087b1b02f97586e0346e8f28f4960f1fd37\r\n\r\nOnce the website is updated, mC4 will be listed in https://huggingface.co/datasets?filter=languages:he\r\n\r\n" ]
2021-07-20T23:13:52
2021-07-21T16:34:53
2021-07-21T16:34:53
NONE
null
null
null
null
https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2688/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2688/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
17:21:01
https://api.github.com/repos/huggingface/datasets/issues/2683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2683/comments
https://api.github.com/repos/huggingface/datasets/issues/2683/events
https://github.com/huggingface/datasets/issues/2683
948,721,379
MDU6SXNzdWU5NDg3MjEzNzk=
2,683
Cache directories changed due to recent changes in how config kwargs are handled
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[]
2021-07-20T14:37:57
2021-07-20T16:27:15
2021-07-20T16:27:15
MEMBER
null
null
null
null
Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example:
```python
from datasets import load_dataset_builder
c4_builder = load_dataset_builder("c4", "en")
print(c4_builder.cache_dir)
# /Users/quentinlhoest/.cache/huggingface/datasets/c4/en-174d3b7155eb68db/0.0.0/...
# instead of
# /Users/quentinlhoest/.cache/huggingface/datasets/c4/en/0.0.0/...
```
This issue could be annoying since it would simply ignore old cache directories for users, and regenerate datasets

cc @stas00 this is what you experienced a few days ago
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2683/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1:49:18
https://api.github.com/repos/huggingface/datasets/issues/2681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2681/comments
https://api.github.com/repos/huggingface/datasets/issues/2681/events
https://github.com/huggingface/datasets/issues/2681
948,708,645
MDU6SXNzdWU5NDg3MDg2NDU=
2,681
5 duplicate datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Yes this was documented in the PR that added this hf->paperswithcode mapping (https://github.com/huggingface/datasets/pull/2404) and AFAICT those are slightly distinct datasets so I think it's a wontfix\r\n\r\nFor context on the paperswithcode mapping you can also refer to https://github.com/huggingface/huggingface_hub/pull/43 which contains a lot of background discussion ", "Thanks for the antecedents. I close." ]
2021-07-20T14:25:00
2021-07-20T15:44:17
2021-07-20T15:44:17
COLLABORATOR
null
null
null
null
## Describe the bug
In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are:

- https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch
  <img width="838" alt="Capture d’écran 2021-07-20 à 16 33 58" src="https://user-images.githubusercontent.com/1676121/126342757-4625522a-f788-41a3-bd1f-2a8b9817bbf5.png">
- https://paperswithcode.com/dataset/squad -> https://huggingface.co/datasets/squad and https://huggingface.co/datasets/squad_v2
- https://paperswithcode.com/dataset/narrativeqa -> https://huggingface.co/datasets/narrativeqa and https://huggingface.co/datasets/narrativeqa_manual
- https://paperswithcode.com/dataset/hate-speech-and-offensive-language -> https://huggingface.co/datasets/hate_offensive and https://huggingface.co/datasets/hate_speech_offensive
- https://paperswithcode.com/dataset/newsph-nli -> https://huggingface.co/datasets/newsph and https://huggingface.co/datasets/newsph_nli

Possible solutions:
- don't fix (it works)
- for each pair of duplicate datasets, remove one, and create an alias to the other.

## Steps to reproduce the bug
Visit the Paperswithcode links, and look at the "Dataset Loaders" section

## Expected results
There should only be one reference to a Hugging Face dataset loader

## Actual results
Two Hugging Face dataset loaders
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2681/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2681/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1:19:17
https://api.github.com/repos/huggingface/datasets/issues/2679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2679/comments
https://api.github.com/repos/huggingface/datasets/issues/2679/events
https://github.com/huggingface/datasets/issues/2679
948,506,638
MDU6SXNzdWU5NDg1MDY2Mzg=
2,679
Cannot load the blog_authorship_corpus due to codec errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/38069449?v=4", "events_url": "https://api.github.com/users/izaskr/events{/privacy}", "followers_url": "https://api.github.com/users/izaskr/followers", "following_url": "https://api.github.com/users/izaskr/following{/other_user}", "gists_url": "https://api.github.com/users/izaskr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/izaskr", "id": 38069449, "login": "izaskr", "node_id": "MDQ6VXNlcjM4MDY5NDQ5", "organizations_url": "https://api.github.com/users/izaskr/orgs", "received_events_url": "https://api.github.com/users/izaskr/received_events", "repos_url": "https://api.github.com/users/izaskr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/izaskr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/izaskr/subscriptions", "type": "User", "url": "https://api.github.com/users/izaskr", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @izaskr, thanks for reporting.\r\n\r\nHowever the traceback you joined does not correspond to the codec error message: it is about other error `NonMatchingSplitsSizesError`. Maybe you missed some important part of your traceback...\r\n\r\nI'm going to have a look at the dataset anyway...", "Hi @izaskr, thanks again for having reported this issue.\r\n\r\nAfter investigation, I have created a Pull Request (#2685) to fix several issues with this dataset:\r\n- the `NonMatchingSplitsSizesError`\r\n- the `UnicodeDecodeError`\r\n\r\nOnce the Pull Request merged into master, you will be able to load this dataset if you install `datasets` from our GitHub repository master branch. Otherwise, you will be able to use it after our next release, by updating `datasets`: `pip install -U datasets`.", "@albertvillanova \r\nCan you shed light on how this fix works?\r\n\r\nWe're experiencing a similar issue. \r\n\r\nIf we run several runs (eg in a Wandb sweep) the first run \"works\" but then we get `NonMatchingSplitsSizesError`\r\n\r\n| run num | actual train examples # | expected example # | recorded example # |\r\n| ------- | -------------- | ----------------- | -------- |\r\n| 1 | 100 | 100 | 100 |\r\n| 2 | 102 | 100 | 102 |\r\n| 3 | 100 | 100 | 202 | \r\n| 4 | 40 | 100 | 40 |\r\n| 5 | 40 | 100 | 40 |\r\n| 6 | 40 | 100 | 40 | \r\n\r\n\r\nThe second through the nth all crash with \r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=19980970, num_examples=100, dataset_name='cies'), 'recorded': SplitInfo(name='train', num_bytes=40163811, num_examples=202, dataset_name='cies')}]\r\n\r\n```" ]
2021-07-20T10:13:20
2021-07-21T17:02:21
2021-07-21T13:11:58
NONE
null
null
null
null
## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.

## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```

## Expected results
Loading the dataset without errors.

## Actual results
An error similar to the one below was raised for (what seems like) every XML file.

/home/izaskr/.cache/huggingface/datasets/downloads/extracted/7cf52524f6517e168604b41c6719292e8f97abbe8f731e638b13423f4212359a/blogs/788358.male.24.Arts.Libra.xml cannot be loaded. Error message: 'utf-8' codec can't decode byte 0xe7 in position 7551: invalid continuation byte

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/load.py", line 856, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 583, in download_and_prepare
    self._download_and_prepare(
  File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 671, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2679/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 2:58:38
https://api.github.com/repos/huggingface/datasets/issues/2678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2678/comments
https://api.github.com/repos/huggingface/datasets/issues/2678/events
https://github.com/huggingface/datasets/issues/2678
948,471,222
MDU6SXNzdWU5NDg0NzEyMjI=
2,678
Import Error in Kaggle notebook
{ "avatar_url": "https://avatars.githubusercontent.com/u/47216475?v=4", "events_url": "https://api.github.com/users/prikmm/events{/privacy}", "followers_url": "https://api.github.com/users/prikmm/followers", "following_url": "https://api.github.com/users/prikmm/following{/other_user}", "gists_url": "https://api.github.com/users/prikmm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prikmm", "id": 47216475, "login": "prikmm", "node_id": "MDQ6VXNlcjQ3MjE2NDc1", "organizations_url": "https://api.github.com/users/prikmm/orgs", "received_events_url": "https://api.github.com/users/prikmm/received_events", "repos_url": "https://api.github.com/users/prikmm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prikmm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prikmm/subscriptions", "type": "User", "url": "https://api.github.com/users/prikmm", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "This looks like an issue with PyArrow. Did you try reinstalling it ?", "@lhoestq I did, and then let pip handle the installation in `pip import datasets`. I also tried using conda but it gives the same error.\r\n\r\nEdit: pyarrow version on kaggle is 4.0.0, it gets replaced with 4.0.1. So, I don't think uninstalling will change anything.\r\n```\r\nInstall Trace of datasets:\r\n\r\nCollecting datasets\r\n Downloading datasets-1.9.0-py3-none-any.whl (262 kB)\r\n |████████████████████████████████| 262 kB 834 kB/s eta 0:00:01\r\nRequirement already satisfied: dill in /opt/conda/lib/python3.7/site-packages (from datasets) (0.3.4)\r\nCollecting pyarrow!=4.0.0,>=1.0.0\r\n Downloading pyarrow-4.0.1-cp37-cp37m-manylinux2014_x86_64.whl (21.8 MB)\r\n |████████████████████████████████| 21.8 MB 6.2 MB/s eta 0:00:01\r\nRequirement already satisfied: importlib-metadata in /opt/conda/lib/python3.7/site-packages (from datasets) (3.4.0)\r\nRequirement already satisfied: huggingface-hub<0.1.0 in /opt/conda/lib/python3.7/site-packages (from datasets) (0.0.8)\r\nRequirement already satisfied: pandas in /opt/conda/lib/python3.7/site-packages (from datasets) (1.2.4)\r\nRequirement already satisfied: requests>=2.19.0 in /opt/conda/lib/python3.7/site-packages (from datasets) (2.25.1)\r\nRequirement already satisfied: fsspec>=2021.05.0 in /opt/conda/lib/python3.7/site-packages (from datasets) (2021.6.1)\r\nRequirement already satisfied: multiprocess in /opt/conda/lib/python3.7/site-packages (from datasets) (0.70.12.2)\r\nRequirement already satisfied: packaging in /opt/conda/lib/python3.7/site-packages (from datasets) (20.9)\r\nCollecting xxhash\r\n Downloading xxhash-2.0.2-cp37-cp37m-manylinux2010_x86_64.whl (243 kB)\r\n |████████████████████████████████| 243 kB 23.7 MB/s eta 0:00:01\r\nRequirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.7/site-packages (from datasets) (1.19.5)\r\nRequirement already satisfied: tqdm>=4.27 in /opt/conda/lib/python3.7/site-packages (from datasets) (4.61.1)\r\nRequirement already satisfied: filelock in /opt/conda/lib/python3.7/site-packages (from huggingface-hub<0.1.0->datasets) (3.0.12)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->datasets) (1.26.5)\r\nRequirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->datasets) (2.10)\r\nRequirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->datasets) (2021.5.30)\r\nRequirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests>=2.19.0->datasets) (4.0.0)\r\nRequirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->datasets) (3.7.4.3)\r\nRequirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->datasets) (3.4.1)\r\nRequirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging->datasets) (2.4.7)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in /opt/conda/lib/python3.7/site-packages (from pandas->datasets) (2.8.1)\r\nRequirement already satisfied: pytz>=2017.3 in /opt/conda/lib/python3.7/site-packages (from pandas->datasets) (2021.1)\r\nRequirement already satisfied: six>=1.5 in /opt/conda/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas->datasets) (1.15.0)\r\nInstalling collected packages: xxhash, pyarrow, 
datasets\r\n Attempting uninstall: pyarrow\r\n Found existing installation: pyarrow 4.0.0\r\n Uninstalling pyarrow-4.0.0:\r\n Successfully uninstalled pyarrow-4.0.0\r\nSuccessfully installed datasets-1.9.0 pyarrow-4.0.1 xxhash-2.0.2\r\nWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\r\n```", "You may need to restart your kaggle notebook after installing a newer version of `pyarrow`.\r\n\r\nIf it doesn't work we'll probably have to create an issue on [arrow's JIRA](https://issues.apache.org/jira/projects/ARROW/issues/), and maybe ask kaggle why it could fail", "> You may need to restart your kaggle notebook before after installing a newer version of `pyarrow`.\r\n> \r\n> If it doesn't work we'll probably have to create an issue on [arrow's JIRA](https://issues.apache.org/jira/projects/ARROW/issues/), and maybe ask kaggle why it could fail\r\n\r\nIt works after restarting.\r\nMy bad, I forgot to restart the notebook. Sorry for the trouble!" ]
2021-07-20T09:28:38
2021-07-21T13:59:26
2021-07-21T13:03:02
NONE
null
null
null
null
## Describe the bug
Not able to import datasets library in kaggle notebooks

## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```

## Expected results
No such error

## Actual results
```
ImportError Traceback (most recent call last)
<ipython-input-9-652e886d387f> in <module>
----> 1 import datasets

/opt/conda/lib/python3.7/site-packages/datasets/__init__.py in <module>
     31 )
     32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
     34 from .arrow_reader import ArrowReader, ReadInstruction
     35 from .arrow_writer import ArrowWriter

/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in <module>
     36 import pandas as pd
     37 import pyarrow as pa
---> 38 import pyarrow.compute as pc
     39 from multiprocess import Pool, RLock
     40 from tqdm.auto import tqdm

/opt/conda/lib/python3.7/site-packages/pyarrow/compute.py in <module>
     16 # under the License.
     17
---> 18 from pyarrow._compute import ( # noqa
     19 Function,
     20 FunctionOptions,

ImportError: /opt/conda/lib/python3.7/site-packages/pyarrow/_compute.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK5arrow7compute15KernelSignature8ToStringEv
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Kaggle
- Python version: 3.7.10
- PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/47216475?v=4", "events_url": "https://api.github.com/users/prikmm/events{/privacy}", "followers_url": "https://api.github.com/users/prikmm/followers", "following_url": "https://api.github.com/users/prikmm/following{/other_user}", "gists_url": "https://api.github.com/users/prikmm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prikmm", "id": 47216475, "login": "prikmm", "node_id": "MDQ6VXNlcjQ3MjE2NDc1", "organizations_url": "https://api.github.com/users/prikmm/orgs", "received_events_url": "https://api.github.com/users/prikmm/received_events", "repos_url": "https://api.github.com/users/prikmm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prikmm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prikmm/subscriptions", "type": "User", "url": "https://api.github.com/users/prikmm", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2678/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2678/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 3:34:24
https://api.github.com/repos/huggingface/datasets/issues/2677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2677/comments
https://api.github.com/repos/huggingface/datasets/issues/2677/events
https://github.com/huggingface/datasets/issues/2677
948,429,788
MDU6SXNzdWU5NDg0Mjk3ODg=
2,677
Error when downloading C4
{ "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aktsvigun", "id": 36672861, "login": "Aktsvigun", "node_id": "MDQ6VXNlcjM2NjcyODYx", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "type": "User", "url": "https://api.github.com/users/Aktsvigun", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "Hi Thanks for reporting !\r\nIt looks like these files are not correctly reported in the list of expected files to download, let me fix that ;)", "Alright this is fixed now. We'll do a new release soon to make the fix available.\r\n\r\nIn the meantime feel free to simply pass `ignore_verifications=True` to `load_dataset` to skip this error", "@lhoestq thank you for such a quick feedback!" ]
2021-07-20T08:37:30
2021-07-20T14:41:31
2021-07-20T14:38:10
NONE
null
null
null
null
Hi, I am trying to download `en` corpus from C4 dataset. However, I get an error caused by validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug or do I have some configurations missing on my server? Thanks! <img width="1014" alt="Снимок экрана 2021-07-20 в 11 37 17" src="https://user-images.githubusercontent.com/36672861/126289448-6e0db402-5f3f-485a-bf74-eb6e0271fc25.png">
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2677/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2677/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
6:00:40
https://api.github.com/repos/huggingface/datasets/issues/2670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2670/comments
https://api.github.com/repos/huggingface/datasets/issues/2670/events
https://github.com/huggingface/datasets/issues/2670
947,120,709
MDU6SXNzdWU5NDcxMjA3MDk=
2,670
Using sharding to parallelize indexing
{ "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "events_url": "https://api.github.com/users/ggdupont/events{/privacy}", "followers_url": "https://api.github.com/users/ggdupont/followers", "following_url": "https://api.github.com/users/ggdupont/following{/other_user}", "gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ggdupont", "id": 5583410, "login": "ggdupont", "node_id": "MDQ6VXNlcjU1ODM0MTA=", "organizations_url": "https://api.github.com/users/ggdupont/orgs", "received_events_url": "https://api.github.com/users/ggdupont/received_events", "repos_url": "https://api.github.com/users/ggdupont/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions", "type": "User", "url": "https://api.github.com/users/ggdupont", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[]
2021-07-18T21:26:26
2021-10-07T13:33:25
null
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.**
Creating an elasticsearch index on large dataset could be quite long and cannot be parallelized on shard (the index creation is colliding)

**Describe the solution you'd like**
When working on dataset shards, if an index already exists, its mapping should be checked and if compatible, the indexing process should continue with the shard data.

Additionally, at the end of the process, the `_indexes` dict should be send back to the original dataset object (from which the shards have been created) to allow to use the index for later filtering on the whole dataset.

**Describe alternatives you've considered**
Each dataset shard could created independent partial indices. then on the whole dataset level, indices should be all referred in `_indexes` dict and be used in querying through `get_nearest_examples()`. The drawback is that the scores will be computed independently on the partial indices leading to inconsistent values for most scoring based on corpus level statistics (tf/idf, BM25).

**Additional context**
The objectives is to parallelize the index creation to speed-up the process (ie surcharging the ES server which is fine to handle large load) while later enabling search on the whole dataset.
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/2670/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2670/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2669/comments
https://api.github.com/repos/huggingface/datasets/issues/2669/events
https://github.com/huggingface/datasets/issues/2669
946,982,998
MDU6SXNzdWU5NDY5ODI5OTg=
2,669
Metric kwargs are not passed to underlying external metric f1_score
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst, note that `\"min\"` is not an allowed value for `average`. According to scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html), `average` can only take the values: `{\"micro\", \"macro\", \"samples\", \"weighted\", \"binary\"} or None, default=\"binary\"`.\r\n\r\nSecond, you should take into account that all additional metric-specific argument should be passed in the method `compute` (and not in the method `load_metric`). You can find more information in our documentation: https://huggingface.co/docs/datasets/using_metrics.html#computing-the-metric-scores\r\n\r\nSo for example, if you would like to calculate the macro-averaged F1 score, you should use:\r\n```python\r\nimport datasets\r\n\r\nf1 = datasets.load_metric(\"f1\", keep_in_memory=True)\r\nf1.add_batch(predictions=[0,2,3], references=[1, 2, 3])\r\nf1.compute(average=\"macro\")\r\n```", "Thanks, that was it. A bit strange though, since `load_metric` had an argument `metric_init_kwargs`. I assume that that's for specific initialisation arguments whereas `average` is for the function itself." ]
2021-07-18T08:32:31
2021-07-18T18:36:05
2021-07-18T11:19:04
CONTRIBUTOR
null
null
null
null
## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.

## Steps to reproduce the bug
```python
import datasets
f1 = datasets.load_metric("f1", keep_in_memory=True, average="min")
f1.add_batch(predictions=[0,2,3], references=[1, 2, 3])
f1.compute()
```

## Expected results
No error, because `average="min"` should be passed correctly to f1_score in sklearn.

## Actual results
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute
    output = self._compute(predictions=predictions, references=references, **kwargs)
  File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute
    "f1": f1_score(
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score
    return fbeta_score(y_true, y_pred, beta=1, labels=labels,
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score
    _, _, f, _ = precision_recall_fscore_support(y_true, y_pred,
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support
    labels = _check_set_wise_labels(y_true, y_pred, average, labels,
  File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels
    raise ValueError("Target is %s but average='binary'. Please "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2669/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2669/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2:46:33
https://api.github.com/repos/huggingface/datasets/issues/2663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2663/comments
https://api.github.com/repos/huggingface/datasets/issues/2663/events
https://github.com/huggingface/datasets/issues/2663
946,552,273
MDU6SXNzdWU5NDY1NTIyNzM=
2,663
[`to_json`] add multi-proc sharding support
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Hi @stas00, \r\nI want to work on this issue and I was thinking why don't we use `imap` [in this loop](https://github.com/huggingface/datasets/blob/440b14d0dd428ae1b25881aa72ba7bbb8ad9ff84/src/datasets/io/json.py#L99)? This way, using offset (which is being used to slice the pyarrow table) we can convert pyarrow table to `json` using multiprocessing. I've a small code snippet for some clarity:\r\n```\r\nresult = list(\r\n pool.imap(self._apply_df, [(offset, batch_size) for offset in range(0, len(self.dataset), batch_size)])\r\n )\r\n```\r\n`_apply_df` is a function which will return `batch.to_pandas().to_json(path_or_buf=None, orient=\"records\", lines=True)` which is basically json version of the batched pyarrow table. Later on we can concatenate it to form json file? \r\n\r\nI think the only downside here is to write file from `imap` output (output would be a list and we'll need to iterate over it and write in a file) which might add a little overhead cost. What do you think about this?", "Followed up in https://github.com/huggingface/datasets/pull/2747" ]
2021-07-16T19:41:50
2021-09-13T13:56:37
2021-09-13T13:56:37
CONTRIBUTOR
null
null
null
null
As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR. I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-proc support could be implemented internally in `to_json` via `num_proc` argument. I guess `num_proc` will be the number of shards? I think the user will need to use this feature wisely, since too many processes writing to say normal style HD is likely to be slower than one process. I'm not sure whether the user should be responsible to concatenate the shards at the end or `datasets`, either way works for my needs.

The code I was using:
```
from multiprocessing import cpu_count, Process, Queue

[...]

filtered_dataset = concat_dataset.map(filter_short_documents, batched=True, batch_size=256, num_proc=cpu_count())

DATASET_NAME = "oscar"
SHARDS = 10

def process_shard(idx):
    print(f"Sharding {idx}")
    ds_shard = filtered_dataset.shard(SHARDS, idx, contiguous=True)
    # ds_shard = ds_shard.shuffle() # remove contiguous=True above if shuffling
    print(f"Saving {DATASET_NAME}-{idx}.jsonl")
    ds_shard.to_json(f"{DATASET_NAME}-{idx}.jsonl", orient="records", lines=True, force_ascii=False)

queue = Queue()
processes = [Process(target=process_shard, args=(idx,)) for idx in range(SHARDS)]
for p in processes:
    p.start()
for p in processes:
    p.join()
```

Thank you!
@lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2663/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
58 days, 18:14:47
https://api.github.com/repos/huggingface/datasets/issues/2658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2658/comments
https://api.github.com/repos/huggingface/datasets/issues/2658/events
https://github.com/huggingface/datasets/issues/2658
946,139,532
MDU6SXNzdWU5NDYxMzk1MzI=
2,658
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[]
2021-07-16T10:05:44
2021-07-16T12:46:06
2021-07-16T12:46:06
MEMBER
null
null
null
null
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","`, which makes it impossible to have the csv loader infer the separator.

Related to https://github.com/huggingface/datasets/pull/2656

cc @SBrandeis
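For illustration only (this is the pandas behaviour the loader would need to forward, not the `datasets` code itself): with the Python engine, `pandas.read_csv` can sniff the delimiter when `sep=None`.

```python
import pandas as pd

# Delimiter is inferred via csv.Sniffer when sep=None and engine="python";
# "data.tsv" is a placeholder path for any delimited file.
df = pd.read_csv("data.tsv", sep=None, engine="python")
```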
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2658/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2:40:22
https://api.github.com/repos/huggingface/datasets/issues/2657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2657/comments
https://api.github.com/repos/huggingface/datasets/issues/2657/events
https://github.com/huggingface/datasets/issues/2657
945,822,829
MDU6SXNzdWU5NDU4MjI4Mjk=
2,657
`to_json` reporting enhancements
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[]
2021-07-15T23:32:18
2021-07-15T23:33:53
null
CONTRIBUTOR
null
null
null
null
While using `to_json`, two things came to mind that would have made the experience easier on the user:

1. Could we have a `desc` arg for the tqdm bar, with a fallback to just `to_json`, so that it'd be clear to the user what's happening? Surely one can just print a description before calling `to_json`, but I thought it'd help to have it self-identify like you did for other progress bars recently.

2. It took me a while to make sense of the reported numbers:
```
 22%|██▏ | 1536/7076 [12:30:57<44:09:42, 28.70s/it]
```
Each iteration here happens to be 10K samples, and the total is 70M records. But the user doesn't know that, so the progress bar itself is perfect, but the numbers it reports are meaningless until one discovers that 1 it = 10K samples. And one still has to convert these in one's head, so it's not quick.

I'm not exactly sure what the best way to approach this is; perhaps it can be part of `desc`, or it could report M or K, so it'd be built in if it were to print, e.g.:
```
 22%|██▏ | 15360K/70760K [12:30:57<44:09:42, 28.70s/it]
```
or
```
 22%|██▏ | 15.36M/70.76M [12:30:57<44:09:42, 28.70s/it]
```
(while of course remaining friendly to small datasets). I forget if tqdm lets you add a magnitude identifier to the running count.

Thank you!
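As a side note (not from the issue itself), tqdm can already scale the running count with `unit_scale=True`, which prints `15.4M/70.8M`-style numbers; a small sketch with made-up sizes:

```python
from tqdm import tqdm

total_examples = 70_760_000   # hypothetical total record count
batch_size = 10_000           # one "iteration" in the report above

with tqdm(total=total_examples, desc="to_json", unit="ex", unit_scale=True) as pbar:
    for start in range(0, total_examples, batch_size):
        # ... write this batch to disk here ...
        pbar.update(min(batch_size, total_examples - start))
```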
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2657/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2657/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2655/comments
https://api.github.com/repos/huggingface/datasets/issues/2655/events
https://github.com/huggingface/datasets/issues/2655
945,382,723
MDU6SXNzdWU5NDUzODI3MjM=
2,655
Allow the selection of multiple columns at once
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Hi! I was looking into this and hope you can clarify a point. Your my_dataset variable would be of type DatasetDict which means the alternative you've described (dict comprehension) is what makes sense. \r\nIs there a reason why you wouldn't want to convert my_dataset to a pandas df if you'd like to use it like one? Please let me know if I'm missing something.", "Hi! Sorry for the delay.\r\n\r\nIn this case, the dataset would be a `datasets.Dataset` and we want to select multiple columns, the `idx` and `label` columns for example.\r\n\r\nMy issue is that my dataset is too big for memory if I load everything into pandas.", "Hello - any update on this? Thank you.", "Please note that you could use the method `Dataset.select_columns`: https://huggingface.co/docs/datasets/v2.15.0/en/package_reference/main_classes#datasets.Dataset.select_columns", "Thank you!" ]
2021-07-15T13:30:45
2024-01-09T15:11:27
2024-01-09T07:46:28
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.

**Describe the solution you'd like**
```python
my_dataset = ...  # Has columns ['idx', 'sentence', 'label']
idx, label = my_dataset[['idx', 'label']]
```

**Describe alternatives you've considered**
We can do `[dataset[col] for col in ('idx', 'label')]`.

**Additional context**
This is of course very minor.
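Editorial note: the `select_columns` method mentioned in the comments covers this without going through pandas; a small sketch (the GLUE/MRPC dataset here is just an example that happens to have `idx` and `label` columns):

```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
subset = ds.select_columns(["idx", "label"])   # new Dataset with only these columns
idx, label = subset["idx"], subset["label"]
```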
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 8, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 8, "url": "https://api.github.com/repos/huggingface/datasets/issues/2655/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2655/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
907 days, 18:15:43
https://api.github.com/repos/huggingface/datasets/issues/2654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2654/comments
https://api.github.com/repos/huggingface/datasets/issues/2654/events
https://github.com/huggingface/datasets/issues/2654
945,167,231
MDU6SXNzdWU5NDUxNjcyMzE=
2,654
Give a user feedback if the dataset he loads is streamable or not
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
[ "#self-assign", "I understand it already raises a `NotImplementedError` exception, eg:\r\n\r\n```\r\n>>> dataset = load_dataset(\"journalists_questions\", name=\"plain_text\", split=\"train\", streaming=True)\r\n\r\n[...]\r\nNotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet\r\n```\r\n" ]
2021-07-15T09:07:27
2021-08-02T11:03:21
null
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.**
I would love to know whether a `dataset` is, with the current implementation, streamable or not.

**Describe the solution you'd like**
We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` but is not streamable, e.g. if it is an archive.

**Describe alternatives you've considered**
Add a new metadata tag for "streaming".
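A user-side probe (a sketch under assumptions, not the built-in warning being requested) is to try fetching one streamed example and catch the `NotImplementedError` mentioned in the comments:

```python
from datasets import load_dataset

def is_streamable(name, split="train", **kwargs):
    try:
        ds = load_dataset(name, split=split, streaming=True, **kwargs)
        next(iter(ds))  # force the first example to be fetched
        return True
    except NotImplementedError:
        return False
```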
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2654/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2653/comments
https://api.github.com/repos/huggingface/datasets/issues/2653/events
https://github.com/huggingface/datasets/issues/2653
945,102,321
MDU6SXNzdWU5NDUxMDIzMjE=
2,653
Add SD task for SUPERB
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Note that this subset requires us to:\r\n\r\n* generate the LibriMix corpus from LibriSpeech\r\n* prepare the corpus for diarization\r\n\r\nAs suggested by @lhoestq we should perform these steps locally and add the prepared data to this public repo on the Hub: https://huggingface.co/datasets/superb/superb-data\r\n\r\nThen we can use the URLs for the files to load the data in `superb`'s dataset loading script.\r\n\r\nFor consistency, I suggest we name the folders in `superb-data` in the same way as the configs in the dataset loading script - e.g. use `sd` for speech diarization in both places :)", "@lewtun @lhoestq: \r\n\r\nI have already generated the LibriMix corpus and prepared the corpus for diarization. The output is 3 dirs (train, dev, test), each one containing 6 files: reco2dur rttm segments spk2utt utt2spk wav.scp\r\n\r\nNext steps:\r\n- Upload these files to the superb-data repo\r\n- Transcribe the corresponding s3prl processing of these files into our superb loading script\r\n\r\nNote that processing of these files is a bit more intricate than usual datasets: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/diarization/dataset.py#L233\r\n\r\n" ]
2021-07-15T07:51:40
2021-08-04T17:03:52
2021-08-04T17:03:52
MEMBER
null
null
null
null
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and the `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).

Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Upload these files to the superb-data repo
- [x] Transcribe the corresponding s3prl processing of these files into our superb loading script
- [ ] README: tags + description sections

Related to #2619.

cc: @lewtun
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2653/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
20 days, 9:12:12
https://api.github.com/repos/huggingface/datasets/issues/2651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2651/comments
https://api.github.com/repos/huggingface/datasets/issues/2651/events
https://github.com/huggingface/datasets/issues/2651
944,796,961
MDU6SXNzdWU5NDQ3OTY5NjE=
2,651
Setting log level higher than warning does not suppress progress bar
{ "avatar_url": "https://avatars.githubusercontent.com/u/1147443?v=4", "events_url": "https://api.github.com/users/Isa-rentacs/events{/privacy}", "followers_url": "https://api.github.com/users/Isa-rentacs/followers", "following_url": "https://api.github.com/users/Isa-rentacs/following{/other_user}", "gists_url": "https://api.github.com/users/Isa-rentacs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Isa-rentacs", "id": 1147443, "login": "Isa-rentacs", "node_id": "MDQ6VXNlcjExNDc0NDM=", "organizations_url": "https://api.github.com/users/Isa-rentacs/orgs", "received_events_url": "https://api.github.com/users/Isa-rentacs/received_events", "repos_url": "https://api.github.com/users/Isa-rentacs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Isa-rentacs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Isa-rentacs/subscriptions", "type": "User", "url": "https://api.github.com/users/Isa-rentacs", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi,\r\n\r\nyou can suppress progress bars by patching logging as follows:\r\n```python\r\nimport datasets\r\nimport logging\r\ndatasets.logging.get_verbosity = lambda: logging.NOTSET\r\n# map call ...\r\n```\r\nEDIT: now you have to use `disable_progress_bar `", "Thank you, it worked :)", "See https://github.com/huggingface/datasets/issues/2528 for reference", "Note also that you can disable the progress bar with\r\n\r\n```python\r\nfrom datasets.utils import disable_progress_bar\r\ndisable_progress_bar()\r\n```\r\n\r\nSee https://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/src/datasets/utils/tqdm_utils.py#L84", "Now the library officially recommends `set_progress_bar_enabled(False)`\r\n\r\n```py\r\nfrom datasets.utils import set_progress_bar_enabled\r\n\r\nset_progress_bar_enabled(False)\r\n```\r\n\r\nsource:\r\n\r\nhttps://github.com/huggingface/datasets/blob/1fd47120ace13626c528367787ffa13e1a26e6c0/src/datasets/utils/tqdm_utils.py#L83-L88\r\n\r\n", "From https://github.com/huggingface/datasets/pull/3897, `disable_progress_bar` is the function you should use", "Now ``disable_progress_bar`` function is in ``datasets/src/datasets/utils/logging.py``.\r\nhttps://github.com/huggingface/datasets/blob/aa555a299ad73c65e3f997a764e9d211675ab05d/src/datasets/utils/logging.py#L233-L236\r\nAnd the method mentioned in https://github.com/huggingface/datasets/issues/2651#issuecomment-880270774 is not working now." ]
2021-07-14T21:06:51
2022-07-08T14:51:57
2021-07-15T03:41:35
NONE
null
null
null
null
## Describe the bug
I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress them by setting the log level higher than `warning`, however doing so doesn't suppress them with version 1.9.0. I also tried setting the `DATASETS_VERBOSITY` environment variable to `error` or `critical`, but that didn't work either.

## Steps to reproduce the bug
```python
import datasets
from datasets.utils.logging import set_verbosity_error
set_verbosity_error()

def dummy_map(batch):
    return batch

common_voice_train = datasets.load_dataset("common_voice", "de", split="train")
common_voice_test = datasets.load_dataset("common_voice", "de", split="test")

common_voice_train.map(dummy_map)
```

## Expected results
- The progress bar for the `.map` call isn't shown

## Actual results
- The progress bar for `.map` is still shown

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyArrow version: 4.0.1
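As the later comments note, newer releases expose an explicit switch for this; a minimal sketch:

```python
from datasets.utils.logging import disable_progress_bar

disable_progress_bar()
# subsequent load_dataset / map / filter calls no longer render tqdm bars
```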
{ "avatar_url": "https://avatars.githubusercontent.com/u/1147443?v=4", "events_url": "https://api.github.com/users/Isa-rentacs/events{/privacy}", "followers_url": "https://api.github.com/users/Isa-rentacs/followers", "following_url": "https://api.github.com/users/Isa-rentacs/following{/other_user}", "gists_url": "https://api.github.com/users/Isa-rentacs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Isa-rentacs", "id": 1147443, "login": "Isa-rentacs", "node_id": "MDQ6VXNlcjExNDc0NDM=", "organizations_url": "https://api.github.com/users/Isa-rentacs/orgs", "received_events_url": "https://api.github.com/users/Isa-rentacs/received_events", "repos_url": "https://api.github.com/users/Isa-rentacs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Isa-rentacs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Isa-rentacs/subscriptions", "type": "User", "url": "https://api.github.com/users/Isa-rentacs", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2651/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2651/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
6:34:44
https://api.github.com/repos/huggingface/datasets/issues/2650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2650/comments
https://api.github.com/repos/huggingface/datasets/issues/2650/events
https://github.com/huggingface/datasets/issues/2650
944,672,565
MDU6SXNzdWU5NDQ2NzI1NjU=
2,650
[load_dataset] shard and parallelize the process
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao", "user_view_type": "public" } ]
[ "I need the same feature for distributed training", "I think @TevenLeScao is exploring adding multiprocessing in `GeneratorBasedBuilder._prepare_split` - feel free to post updates here :)", "Posted a PR to address the building side, still needs something to load sharded arrow files + tests", "Closing as this feature has been implemented in #5107" ]
2021-07-14T18:04:58
2023-11-28T19:11:41
2023-11-28T19:11:40
CONTRIBUTOR
null
null
null
null
- Some huge datasets (e.g. oscar/en) take forever to build the first time, since the build runs on a single CPU core.
- If the build crashes, everything done up to that point is lost.

Request: shard the build over multiple Arrow files, which would enable:
- a much faster build by parallelizing the build process
- if the process crashes, the completed Arrow files don't need to be rebuilt

Thank you! @lhoestq
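For reference (per the closing comment, this later shipped via #5107), recent `datasets` releases expose the parallel build through `num_proc`; a sketch:

```python
from datasets import load_dataset

# Prepares the Arrow shards with 8 worker processes instead of a single core.
ds = load_dataset("oscar", "unshuffled_deduplicated_en", num_proc=8)
```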
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 5, "-1": 0, "confused": 0, "eyes": 3, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 10, "url": "https://api.github.com/repos/huggingface/datasets/issues/2650/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2650/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
867 days, 1:06:42
https://api.github.com/repos/huggingface/datasets/issues/2649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2649/comments
https://api.github.com/repos/huggingface/datasets/issues/2649/events
https://github.com/huggingface/datasets/issues/2649
944,651,229
MDU6SXNzdWU5NDQ2NTEyMjk=
2,649
adding progress bar / ETA for `load_dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Is this done now? I see progress bars when using `load_dataset`.", "There are progress bars when downloading data and when preparing them as Arrow files.\r\n\r\nThe \"total silence\" part mentioned in OP refer to checksums verifications which have had some changes in the latest release 2.10:\r\n- they're disabled by default (other less costly verifications are still done like checking the generated dataset size)\r\n- they're using a tqdm bar as well" ]
2021-07-14T17:34:39
2023-03-27T10:32:49
null
CONTRIBUTOR
null
null
null
null
Please consider:
```
Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2...
HF google storage unreachable. Downloading and preparing it from source
```
and then there is no indication whatsoever of whether things are working or when they'll be done. It's important to have an estimated completion time when running Slurm jobs, since some instances have a cap on run-time. I think for this particular job it sat for 30 min in total silence, and only then started generating:
```
897850 examples [07:24, 10286.71 examples/s]
```
which is already great!

Request:
1. ETA - knowing how many hours to allocate for a Slurm job
2. progress bar - helps to know things are working, aren't stuck, and where we are at

Thank you! @lhoestq
null
{ "+1": 7, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/2649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2649/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2648/comments
https://api.github.com/repos/huggingface/datasets/issues/2648/events
https://github.com/huggingface/datasets/issues/2648
944,484,522
MDU6SXNzdWU5NDQ0ODQ1MjI=
2,648
Add web_split dataset for Paraphrase and Rephrase benchmark
{ "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani", "user_view_type": "public" } ]
[ "#take" ]
2021-07-14T14:24:36
2021-07-14T14:26:12
null
CONTRIBUTOR
null
null
null
null
## Describe:
For splitting complex sentences into simple ones, there is already a dataset and task like wiki_split available in Hugging Face Datasets. web_split is a very similar dataset. Some research papers state that if we train a model on the combination of these two datasets, it yields better results on both test sets. This dataset is built from WebNLG data.

All the dataset-related details are provided in the repository below.

Github link: https://github.com/shashiongithub/Split-and-Rephrase
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2648/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2648/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2646/comments
https://api.github.com/repos/huggingface/datasets/issues/2646/events
https://github.com/huggingface/datasets/issues/2646
944,379,954
MDU6SXNzdWU5NDQzNzk5NTQ=
2,646
downloading of yahoo_answers_topics dataset failed
{ "avatar_url": "https://avatars.githubusercontent.com/u/66781249?v=4", "events_url": "https://api.github.com/users/vikrant7k/events{/privacy}", "followers_url": "https://api.github.com/users/vikrant7k/followers", "following_url": "https://api.github.com/users/vikrant7k/following{/other_user}", "gists_url": "https://api.github.com/users/vikrant7k/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vikrant7k", "id": 66781249, "login": "vikrant7k", "node_id": "MDQ6VXNlcjY2NzgxMjQ5", "organizations_url": "https://api.github.com/users/vikrant7k/orgs", "received_events_url": "https://api.github.com/users/vikrant7k/received_events", "repos_url": "https://api.github.com/users/vikrant7k/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vikrant7k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikrant7k/subscriptions", "type": "User", "url": "https://api.github.com/users/vikrant7k", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! I just tested and it worked fine today for me.\r\n\r\nI think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https://github.com/huggingface/datasets/issues/996 \r\n\r\nFeel free to try again today, now that the quota was reset", "Fixed once data URL was replaced:\r\n- #4023" ]
2021-07-14T12:31:05
2022-08-04T08:28:24
2022-08-04T08:28:24
NONE
null
null
null
null
## Describe the bug
I get the error `datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files` when I try to download the yahoo_answers_topics dataset.

## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
self.dataset = load_dataset(
    'yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]')
```

## Expected results
The dataset downloads and loads without errors.

## Actual results
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
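If the Google Drive download quota mentioned in the comments was the cause, retrying after the quota reset with a forced re-download is usually enough; a sketch assuming a recent `datasets` release that accepts the string form of `download_mode`:

```python
from datasets import load_dataset

dataset = load_dataset(
    "yahoo_answers_topics",
    split="train[:90%]",
    download_mode="force_redownload",  # discard the partially cached download
)
```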
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2646/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2646/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
385 days, 19:57:19
https://api.github.com/repos/huggingface/datasets/issues/2645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2645/comments
https://api.github.com/repos/huggingface/datasets/issues/2645/events
https://github.com/huggingface/datasets/issues/2645
944,374,284
MDU6SXNzdWU5NDQzNzQyODQ=
2,645
load_dataset processing failed with OS error after downloading a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/40395156?v=4", "events_url": "https://api.github.com/users/fake-warrior8/events{/privacy}", "followers_url": "https://api.github.com/users/fake-warrior8/followers", "following_url": "https://api.github.com/users/fake-warrior8/following{/other_user}", "gists_url": "https://api.github.com/users/fake-warrior8/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fake-warrior8", "id": 40395156, "login": "fake-warrior8", "node_id": "MDQ6VXNlcjQwMzk1MTU2", "organizations_url": "https://api.github.com/users/fake-warrior8/orgs", "received_events_url": "https://api.github.com/users/fake-warrior8/received_events", "repos_url": "https://api.github.com/users/fake-warrior8/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fake-warrior8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fake-warrior8/subscriptions", "type": "User", "url": "https://api.github.com/users/fake-warrior8", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! It looks like an issue with pytorch.\r\n\r\nCould you try to run `import torch` and see if it raises an error ?", "> Hi ! It looks like an issue with pytorch.\r\n> \r\n> Could you try to run `import torch` and see if it raises an error ?\r\n\r\nIt works. Thank you!" ]
2021-07-14T12:23:53
2021-07-15T09:34:02
2021-07-15T09:34:02
NONE
null
null
null
null
## Describe the bug
After downloading a dataset like opus100, loading it fails with `OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS`.

## Steps to reproduce the bug
```python
from datasets import load_dataset
this_dataset = load_dataset('opus100', 'af-en')
```

## Expected results
There is no error when running load_dataset.

## Actual results
```
Traceback (most recent call last):
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prep
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 989, in _prepare_split
    example = self.info.features.encode_example(record)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 952, in encode_example
    example = cast_to_python_objects(example)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 219, in cast_to_python_ob
    return _cast_to_python_objects(obj)[0]
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 165, in _cast_to_python_o
    import torch
  File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 188, in <module>
    _load_global_deps()
  File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 141, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/home/anaconda3/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: dlopen: cannot load any more object with static TLS

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "download_hub_opus100.py", line 9, in <module>
    this_dataset = load_dataset('opus100', language_pair)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepa
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 658, in _download_and_prep
    + str(e)
OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS
```

## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-3.13.0-32-generic-x86_64-with-debian-jessie-sid
- Python version: 3.6.6
- PyArrow version: 3.0.0
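A possible workaround (an assumption based on the comment thread, not a confirmed fix): importing `torch` before `datasets` triggers it lazily avoids loading libtorch late in the process, which is what tends to hit the static-TLS limit on older glibc setups.

```python
import torch  # noqa: F401 -- imported early on purpose, before datasets pulls it in lazily
from datasets import load_dataset

this_dataset = load_dataset("opus100", "af-en")
```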
{ "avatar_url": "https://avatars.githubusercontent.com/u/40395156?v=4", "events_url": "https://api.github.com/users/fake-warrior8/events{/privacy}", "followers_url": "https://api.github.com/users/fake-warrior8/followers", "following_url": "https://api.github.com/users/fake-warrior8/following{/other_user}", "gists_url": "https://api.github.com/users/fake-warrior8/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fake-warrior8", "id": 40395156, "login": "fake-warrior8", "node_id": "MDQ6VXNlcjQwMzk1MTU2", "organizations_url": "https://api.github.com/users/fake-warrior8/orgs", "received_events_url": "https://api.github.com/users/fake-warrior8/received_events", "repos_url": "https://api.github.com/users/fake-warrior8/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fake-warrior8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fake-warrior8/subscriptions", "type": "User", "url": "https://api.github.com/users/fake-warrior8", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2645/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2645/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
21:10:09
https://api.github.com/repos/huggingface/datasets/issues/2644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2644/comments
https://api.github.com/repos/huggingface/datasets/issues/2644/events
https://github.com/huggingface/datasets/issues/2644
944,254,748
MDU6SXNzdWU5NDQyNTQ3NDg=
2,644
Batched `map` not allowed to return 0 items
{ "avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4", "events_url": "https://api.github.com/users/pcuenca/events{/privacy}", "followers_url": "https://api.github.com/users/pcuenca/followers", "following_url": "https://api.github.com/users/pcuenca/following{/other_user}", "gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pcuenca", "id": 1177582, "login": "pcuenca", "node_id": "MDQ6VXNlcjExNzc1ODI=", "organizations_url": "https://api.github.com/users/pcuenca/orgs", "received_events_url": "https://api.github.com/users/pcuenca/received_events", "repos_url": "https://api.github.com/users/pcuenca/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions", "type": "User", "url": "https://api.github.com/users/pcuenca", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! Thanks for reporting. Indeed it looks like type inference makes it fail. We should probably just ignore this step until a non-empty batch is passed.", "Sounds good! Do you want me to propose a PR? I'm quite busy right now, but if it's not too urgent I could take a look next week.", "Sure if you're interested feel free to open a PR :)\r\n\r\nYou can also ping me anytime if you have questions or if I can help !", "Sorry to ping you, @lhoestq, did you have a chance to take a look at the proposed PR? Thank you!", "Yes and it's all good, thank you :)\r\n\r\nFeel free to close this issue if it's good for you", "Everything's good, thanks!" ]
2021-07-14T09:58:19
2021-07-26T14:55:15
2021-07-26T14:55:15
MEMBER
null
null
null
null
## Describe the bug

I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them).

According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset), `a batch mapped function can take as input a batch of size N and return a batch of size M where M can be greater or less than N and can even be zero`.

However, when the returned batch has a size of zero (no item in the batch fulfilled the condition), we get an `index out of bounds` error. I think that `arrow_writer.py` is [trying to infer the returned types using the first element returned](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L100), but no elements were returned in this case.

For this error to happen, I'm returning a dictionary that contains empty lists for the keys I want to keep, see below. If I return an empty dictionary instead (no keys), then a different error eventually occurs.

## Steps to reproduce the bug

```python
def select_rows(examples):
    # `key` is a column name that exists in the original dataset
    # The following line simulates no matches found, so we return an empty batch
    result = {'key': []}
    return result

filtered_dataset = dataset.map(
    select_rows,
    remove_columns = dataset.column_names,
    batched = True,
    num_proc = 1,
    desc = "Selecting rows with images that exist"
)
```

The code above immediately triggers the exception. If we use the following instead:

```python
def select_rows(examples):
    # `key` is a column name that exists in the original dataset
    result = {'key': []}  # or defaultdict or whatever
    # code to check for condition and append elements to result
    # some_items_found will be set to True if there were any matching elements in the batch
    return result if some_items_found else {}
```

Then it _seems_ to work, but it eventually fails with some sort of schema error. I believe it may happen when an empty batch is followed by a non-empty one, but I haven't set up a test to verify it.

In my opinion, returning a dictionary with empty lists and valid column names should be accepted as a valid result with zero items.

## Expected results

The dataset would be filtered and only the matching fields would be returned.

## Actual results

An exception is encountered, as described. Using a workaround makes it fail further along the line.

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.1.dev0
- Platform: Linux-5.4.0-53-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
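For readers hitting the same problem, a sketch of an alternative that avoids returning empty batches from `map` altogether is to express the row selection with `Dataset.filter`. The toy dataset, the `path` column, and the existence check below are placeholders, not taken from the original report:

```python
import os

from datasets import Dataset

# Toy dataset for the sketch; "path" is a hypothetical column of file paths.
dataset = Dataset.from_dict({"path": ["/etc/hostname", "/does/not/exist"]})

def rows_exist(batch):
    # With batched=True, filter expects one boolean per row of the batch,
    # so no empty-batch handling is needed even when nothing matches.
    return [os.path.exists(p) for p in batch["path"]]

filtered_dataset = dataset.filter(rows_exist, batched=True)
print(filtered_dataset)
```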
{ "avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4", "events_url": "https://api.github.com/users/pcuenca/events{/privacy}", "followers_url": "https://api.github.com/users/pcuenca/followers", "following_url": "https://api.github.com/users/pcuenca/following{/other_user}", "gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pcuenca", "id": 1177582, "login": "pcuenca", "node_id": "MDQ6VXNlcjExNzc1ODI=", "organizations_url": "https://api.github.com/users/pcuenca/orgs", "received_events_url": "https://api.github.com/users/pcuenca/received_events", "repos_url": "https://api.github.com/users/pcuenca/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions", "type": "User", "url": "https://api.github.com/users/pcuenca", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2644/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
12 days, 4:56:56
https://api.github.com/repos/huggingface/datasets/issues/2643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2643/comments
https://api.github.com/repos/huggingface/datasets/issues/2643/events
https://github.com/huggingface/datasets/issues/2643
944,220,273
MDU6SXNzdWU5NDQyMjAyNzM=
2,643
Enum used in map functions will raise a RecursionError with dill.
{ "avatar_url": "https://avatars.githubusercontent.com/u/100702?v=4", "events_url": "https://api.github.com/users/jorgeecardona/events{/privacy}", "followers_url": "https://api.github.com/users/jorgeecardona/followers", "following_url": "https://api.github.com/users/jorgeecardona/following{/other_user}", "gists_url": "https://api.github.com/users/jorgeecardona/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jorgeecardona", "id": 100702, "login": "jorgeecardona", "node_id": "MDQ6VXNlcjEwMDcwMg==", "organizations_url": "https://api.github.com/users/jorgeecardona/orgs", "received_events_url": "https://api.github.com/users/jorgeecardona/received_events", "repos_url": "https://api.github.com/users/jorgeecardona/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jorgeecardona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jorgeecardona/subscriptions", "type": "User", "url": "https://api.github.com/users/jorgeecardona", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "I'm running into this as well. (Thank you so much for reporting @jorgeecardona — was staring at this massive stack trace and unsure what exactly was wrong!)", "Hi ! Thanks for reporting :)\r\n\r\nUntil this is fixed on `dill`'s side, we could implement a custom saving in our Pickler indefined in utils.py_utils.py\r\nThere is already a suggestion in this message about how to do it:\r\nhttps://github.com/uqfoundation/dill/issues/250#issuecomment-852566284\r\n\r\nLet me know if such a workaround could help, and feel free to open a PR if you want to contribute !", "I have the same bug.\r\nthe code is as follows:\r\n![image](https://user-images.githubusercontent.com/84262181/139785849-620dd4ac-86ce-4212-8163-942bbca305aa.png)\r\nthe error is: \r\n![image](https://user-images.githubusercontent.com/84262181/139785899-88a9bd75-c60b-45a5-b819-830c7c096f3d.png)\r\n\r\nLook for the solution for this bug.", "Hi ! I think your RecursionError comes from a different issue @BitcoinNLPer , could you open a separate issue please ?\r\n\r\nAlso which dataset are you using ? I tried loading `CodedotAI/code_clippy` but I get a different error\r\n```python\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 1615, in load_dataset\r\n **config_kwargs,\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 1446, in load_dataset_builder\r\n builder_cls = import_main_class(dataset_module.module_path)\r\n File \"/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/load.py\", line 101, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"/Users/quentinlhoest/.virtualenvs/hf-datasets/lib/python3.7/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/Users/quentinlhoest/.cache/huggingface/modules/datasets_modules/datasets/CodedotAI___code_clippy/d332f69d036e8c80f47bc9a96d676c3fa30cb50af7bb81e2d4d12e80b83efc4d/code_clippy.py\", line 66, in <module>\r\n url_elements = results.find_all(\"a\")\r\nAttributeError: 'NoneType' object has no attribute 'find_all'\r\n```" ]
2021-07-14T09:16:08
2021-11-02T09:51:11
null
NONE
null
null
null
null
## Describe the bug

Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284

In my particular case, I use an enum to define an argument with fixed options, using the `TrainingArguments` dataclass as base class and the `HfArgumentParser`. In the same file I use a `ds.map` that tries to pickle the content of the module, including the definition of the enum, and runs into the dill bug described above.

## Steps to reproduce the bug

```python
from datasets import load_dataset
from enum import Enum

class A(Enum):
    a = 'a'

def main():
    a = A.a

    def f(x):
        return {} if a == a.a else x

    ds = load_dataset('cnn_dailymail', '3.0.0')['test']
    ds = ds.map(f, num_proc=15)

if __name__ == "__main__":
    main()
```

## Expected results

The known problem with dill could be prevented as explained in the link above (workaround). Since `HfArgumentParser` nicely uses the enum class for choices, it makes sense to also deal with this bug under the hood.

## Actual results

```python
  File "/home/xxxx/miniconda3/lib/python3.8/site-packages/dill/_dill.py", line 1373, in save_type
    pickler.save_reduce(_create_type, (type(obj), obj.__name__,
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 690, in save_reduce
    save(args)
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 558, in save
    f(self, obj)  # Call unbound method with explicit self
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 899, in save_tuple
    save(element)
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 534, in save
    self.framer.commit_frame()
  File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 220, in commit_frame
    if f.tell() >= self._FRAME_SIZE_TARGET or force:
RecursionError: maximum recursion depth exceeded while calling a Python object
```

## Environment info

- `datasets` version: 1.8.0
- Platform: Linux-5.9.0-4-amd64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 3.0.0
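A possible user-side workaround, separate from the fix discussed by the maintainers and stated here only as an assumption, is to keep Enum members out of the mapped function's closure so dill never has to serialize the Enum class. A minimal sketch (using a small in-memory dataset instead of cnn_dailymail):

```python
from enum import Enum

from datasets import Dataset


class A(Enum):
    a = 'a'


def main():
    # Capture the plain value instead of the Enum member, so pickling the
    # closure of `f` only has to serialize a string, not the Enum class.
    a_value = A.a.value

    def f(x):
        # Plain-value comparison; no Enum member is referenced here.
        return {"is_a": a_value == 'a'}

    ds = Dataset.from_dict({"text": ["hello", "world"]})  # small stand-in dataset
    ds = ds.map(f, num_proc=2)


if __name__ == "__main__":
    main()
```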
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2643/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2643/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2642/comments
https://api.github.com/repos/huggingface/datasets/issues/2642/events
https://github.com/huggingface/datasets/issues/2642
944,175,697
MDU6SXNzdWU5NDQxNzU2OTc=
2,642
Support multi-worker with streaming dataset (IterableDataset).
{ "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "events_url": "https://api.github.com/users/changjonathanc/events{/privacy}", "followers_url": "https://api.github.com/users/changjonathanc/followers", "following_url": "https://api.github.com/users/changjonathanc/following{/other_user}", "gists_url": "https://api.github.com/users/changjonathanc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/changjonathanc", "id": 31893406, "login": "changjonathanc", "node_id": "MDQ6VXNlcjMxODkzNDA2", "organizations_url": "https://api.github.com/users/changjonathanc/orgs", "received_events_url": "https://api.github.com/users/changjonathanc/received_events", "repos_url": "https://api.github.com/users/changjonathanc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/changjonathanc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changjonathanc/subscriptions", "type": "User", "url": "https://api.github.com/users/changjonathanc", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Hi ! This is a great idea :)\r\nI think we could have something similar to what we have in `datasets.Dataset.map`, i.e. a `num_proc` parameter that tells how many processes to spawn to parallelize the data processing. \r\n\r\nRegarding AUTOTUNE, this could be a nice feature as well, we could see how to add it in a second step", "Any update on this feature request?", "Not yet, I'm happy to provide some guidance if someone wants to give it a shot though.\r\n\r\nThe code that applies the `map` function is in `iterable_dataset.py`, in `MappedExamplesIterable.__iter__`" ]
2021-07-14T08:22:58
2024-05-03T10:11:04
null
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.**
The current `.map` does not support multi-processing, so the CPU can become a bottleneck if the pre-processing is complex (e.g. T5 span masking).

**Describe the solution you'd like**
Ideally `.map` should support multi-worker processing like tfds, with `AUTOTUNE`.

**Describe alternatives you've considered**
A simpler solution is to shard the dataset and process it in parallel with a PyTorch DataLoader, as in the sketch below. The shards do not need to be of equal size.
* https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset

**Additional context**
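A minimal sketch of that DataLoader-based alternative. The wrapper class, the `expensive_preprocess` stand-in, and the toy `source` list are all illustrative assumptions, not part of the `datasets` API:

```python
from torch.utils.data import DataLoader, IterableDataset, get_worker_info


class ShardedIterable(IterableDataset):
    """Wraps any iterable and splits its items across DataLoader workers (round-robin)."""

    def __init__(self, source, transform):
        self.source = source
        self.transform = transform

    def __iter__(self):
        info = get_worker_info()
        worker_id = info.id if info is not None else 0
        num_workers = info.num_workers if info is not None else 1
        for i, example in enumerate(self.source):
            # Shards need not be of equal size; each worker just keeps its share.
            if i % num_workers == worker_id:
                yield self.transform(example)


def expensive_preprocess(example):
    # Stand-in for heavy per-example work such as T5 span masking.
    return {"length": len(example["text"])}


if __name__ == "__main__":
    source = [{"text": "foo"}, {"text": "foobar"}, {"text": "baz"}, {"text": "quux"}]
    loader = DataLoader(ShardedIterable(source, expensive_preprocess), num_workers=2, batch_size=2)
    for batch in loader:
        print(batch)
```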
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2642/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2642/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2641/comments
https://api.github.com/repos/huggingface/datasets/issues/2641/events
https://github.com/huggingface/datasets/issues/2641
943,838,085
MDU6SXNzdWU5NDM4MzgwODU=
2,641
load_dataset("financial_phrasebank") NonMatchingChecksumError
{ "avatar_url": "https://avatars.githubusercontent.com/u/13956255?v=4", "events_url": "https://api.github.com/users/courtmckay/events{/privacy}", "followers_url": "https://api.github.com/users/courtmckay/followers", "following_url": "https://api.github.com/users/courtmckay/following{/other_user}", "gists_url": "https://api.github.com/users/courtmckay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/courtmckay", "id": 13956255, "login": "courtmckay", "node_id": "MDQ6VXNlcjEzOTU2MjU1", "organizations_url": "https://api.github.com/users/courtmckay/orgs", "received_events_url": "https://api.github.com/users/courtmckay/received_events", "repos_url": "https://api.github.com/users/courtmckay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/courtmckay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/courtmckay/subscriptions", "type": "User", "url": "https://api.github.com/users/courtmckay", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi! It's probably because this dataset is stored on google drive and it has a per day quota limit. It should work if you retry, I was able to initiate the download.\r\n\r\nSimilar issue [here](https://github.com/huggingface/datasets/issues/2646)", "Hi ! Loading the dataset works on my side as well.\r\nFeel free to try again and let us know if it works for you know", "Thank you! I've been trying periodically for the past month, and no luck yet with this particular dataset. Just tried again and still hitting the checksum error.\r\n\r\nCode:\r\n\r\n`dataset = load_dataset(\"financial_phrasebank\", \"sentences_allagree\") `\r\n\r\nTraceback:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n<ipython-input-2-55cc2144f31e> in <module>\r\n----> 1 dataset = load_dataset(\"financial_phrasebank\", \"sentences_allagree\")\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 859 ignore_verifications=ignore_verifications,\r\n 860 try_from_hf_gcs=try_from_hf_gcs,\r\n--> 861 use_auth_token=use_auth_token,\r\n 862 )\r\n 863 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 582 if not downloaded_from_gcs:\r\n 583 self._download_and_prepare(\r\n--> 584 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 585 )\r\n 586 # Sync info\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 642 if verify_infos:\r\n 643 verify_checksums(\r\n--> 644 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 645 )\r\n 646 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip']\r\n```", "Fixed once data files are hosted on the Hub:\r\n- #4598" ]
2021-07-13T21:21:49
2022-08-04T08:30:08
2022-08-04T08:30:08
NONE
null
null
null
null
## Describe the bug

Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError.

## Steps to reproduce the bug

```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_allagree')
```

## Expected results

I expect to see the financial_phrasebank dataset downloaded successfully.

## Actual results

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip']

## Environment info

- `datasets` version: 1.9.0
- Platform: Linux-4.14.232-177.418.amzn2.x86_64-x86_64-with-debian-10.6
- Python version: 3.7.10
- PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2641/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2641/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
386 days, 11:08:19
https://api.github.com/repos/huggingface/datasets/issues/2630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2630/comments
https://api.github.com/repos/huggingface/datasets/issues/2630/events
https://github.com/huggingface/datasets/issues/2630
942,102,956
MDU6SXNzdWU5NDIxMDI5NTY=
2,630
Progress bars are not properly rendered in Jupyter notebook
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "To add my experience when trying to debug this issue:\r\n\r\nSeems like previously the workaround given [here](https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308) worked around this issue. But with the latest version of jupyter/tqdm I still get terminal warnings that IPython tried to send a message from a forked process.", "Hi @mludv, thanks for the hint!!! :) \r\n\r\nWe will definitely take it into account to try to fix this issue... It seems somehow related to `multiprocessing` and `tqdm`..." ]
2021-07-12T14:07:13
2022-02-03T15:55:33
2022-02-03T15:55:33
MEMBER
null
null
null
null
## Describe the bug

The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).

## Steps to reproduce the bug

```python
ds.map(tokenize, num_proc=10)
```

## Expected results

Jupyter widgets displaying the progress bars.

## Actual results

Simple plain progress bars.

cc: Reported by @thomwolf
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2630/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2630/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
206 days, 1:48:20
https://api.github.com/repos/huggingface/datasets/issues/2629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2629/comments
https://api.github.com/repos/huggingface/datasets/issues/2629/events
https://github.com/huggingface/datasets/issues/2629
941,819,205
MDU6SXNzdWU5NDE4MTkyMDU=
2,629
Load datasets from the Hub without requiring a dataset script
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "This is so cool, let us know if we can help with anything on the hub side (@Pierrci @elishowk) 🎉 " ]
2021-07-12T08:45:17
2021-08-25T14:18:08
2021-08-25T14:18:08
MEMBER
null
null
null
null
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `data_files` argument.

This feature should be compatible with private repositories and dataset streaming.

This can be implemented by checking the extension of the files in the dataset repository and then by using the right dataset builder that is already packaged in the library (csv/json/text/parquet/etc.).
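A sketch of what the intended usage could look like once implemented. The repository name and file names below are made up for illustration, and the exact keyword arguments are an assumption based on the existing `load_dataset` signature:

```python
from datasets import load_dataset

# Hypothetical repository "username/my_dataset" containing plain CSV files;
# the builder would be inferred from the file extensions.
dataset = load_dataset(
    "username/my_dataset",
    data_files={"train": "train.csv", "test": "test.csv"},
    use_auth_token=True,  # for a private repository
)
```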
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 7, "hooray": 2, "laugh": 0, "rocket": 2, "total_count": 11, "url": "https://api.github.com/repos/huggingface/datasets/issues/2629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2629/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
44 days, 5:32:51
https://api.github.com/repos/huggingface/datasets/issues/2625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2625/comments
https://api.github.com/repos/huggingface/datasets/issues/2625/events
https://github.com/huggingface/datasets/issues/2625
941,439,922
MDU6SXNzdWU5NDE0Mzk5MjI=
2,625
⚛️😇⚙️🔑
{ "avatar_url": "https://avatars.githubusercontent.com/u/50596661?v=4", "events_url": "https://api.github.com/users/hustlen0mics/events{/privacy}", "followers_url": "https://api.github.com/users/hustlen0mics/followers", "following_url": "https://api.github.com/users/hustlen0mics/following{/other_user}", "gists_url": "https://api.github.com/users/hustlen0mics/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hustlen0mics", "id": 50596661, "login": "hustlen0mics", "node_id": "MDQ6VXNlcjUwNTk2NjYx", "organizations_url": "https://api.github.com/users/hustlen0mics/orgs", "received_events_url": "https://api.github.com/users/hustlen0mics/received_events", "repos_url": "https://api.github.com/users/hustlen0mics/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hustlen0mics/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hustlen0mics/subscriptions", "type": "User", "url": "https://api.github.com/users/hustlen0mics", "user_view_type": "public" }
[]
closed
false
null
[]
[]
2021-07-11T12:14:34
2021-07-12T05:55:59
2021-07-12T05:55:59
NONE
null
null
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2625/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2625/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
17:41:25
https://api.github.com/repos/huggingface/datasets/issues/2624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2624/comments
https://api.github.com/repos/huggingface/datasets/issues/2624/events
https://github.com/huggingface/datasets/issues/2624
941,318,247
MDU6SXNzdWU5NDEzMTgyNDc=
2,624
can't set verbosity for `metric.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/66082334?v=4", "events_url": "https://api.github.com/users/thomas-happify/events{/privacy}", "followers_url": "https://api.github.com/users/thomas-happify/followers", "following_url": "https://api.github.com/users/thomas-happify/following{/other_user}", "gists_url": "https://api.github.com/users/thomas-happify/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomas-happify", "id": 66082334, "login": "thomas-happify", "node_id": "MDQ6VXNlcjY2MDgyMzM0", "organizations_url": "https://api.github.com/users/thomas-happify/orgs", "received_events_url": "https://api.github.com/users/thomas-happify/received_events", "repos_url": "https://api.github.com/users/thomas-happify/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomas-happify/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomas-happify/subscriptions", "type": "User", "url": "https://api.github.com/users/thomas-happify", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Thanks @thomas-happify for reporting and thanks @mariosasko for the fix." ]
2021-07-10T20:23:45
2021-07-12T05:54:29
2021-07-12T05:54:29
NONE
null
null
null
null
## Describe the bug

```
[2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock
[2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.
[2021-07-10 20:13:11,531][datasets.arrow_dataset][INFO] - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.
[2021-07-10 20:13:11,543][/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric.py][INFO] - Removing /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow
```

As you can see, `datasets` logging comes from different places. `filelock`, `arrow_writer` & `arrow_dataset` come from `datasets.*`, which is expected. However, `metric.py` logging comes from `/conda/envs/myenv/lib/python3.8/site-packages/datasets/`.

So when setting `datasets.utils.logging.set_verbosity_error()`, it still logs the last message, which is annoying during evaluation.

I had to do

```
logging.getLogger("/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric").setLevel(logging.ERROR)
```

to fully mute these messages.

## Expected results

It shouldn't log these messages when setting `datasets.utils.logging.set_verbosity_error()`.

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: tried both 1.8.0 & 1.9.0
- Platform: Ubuntu 18.04.5 LTS
- Python version: 3.8.10
- PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2624/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2624/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 9:30:44
https://api.github.com/repos/huggingface/datasets/issues/2622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2622/comments
https://api.github.com/repos/huggingface/datasets/issues/2622/events
https://github.com/huggingface/datasets/issues/2622
941,127,785
MDU6SXNzdWU5NDExMjc3ODU=
2,622
Integration with AugLy
{ "avatar_url": "https://avatars.githubusercontent.com/u/890615?v=4", "events_url": "https://api.github.com/users/Darktex/events{/privacy}", "followers_url": "https://api.github.com/users/Darktex/followers", "following_url": "https://api.github.com/users/Darktex/following{/other_user}", "gists_url": "https://api.github.com/users/Darktex/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Darktex", "id": 890615, "login": "Darktex", "node_id": "MDQ6VXNlcjg5MDYxNQ==", "organizations_url": "https://api.github.com/users/Darktex/orgs", "received_events_url": "https://api.github.com/users/Darktex/received_events", "repos_url": "https://api.github.com/users/Darktex/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Darktex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Darktex/subscriptions", "type": "User", "url": "https://api.github.com/users/Darktex", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Hi,\r\n\r\nyou can define your own custom formatting with `Dataset.set_transform()` and then run the tokenizer with the batches of augmented data as follows:\r\n```python\r\ndset = load_dataset(\"imdb\", split=\"train\") # Let's say we are working with the IMDB dataset\r\ndset.set_transform(lambda ex: {\"text\": augly_text_augmentation(ex[\"text\"])}, columns=\"text\", output_all_columns=True)\r\ndataloader = torch.utils.data.DataLoader(dset, batch_size=32)\r\nfor epoch in range(5):\r\n for batch in dataloader:\r\n tokenizer_output = tokenizer(batch.pop(\"text\"), padding=True, truncation=True, return_tensors=\"pt\")\r\n batch.update(tokenizer_output)\r\n output = model(**batch)\r\n ...\r\n```", "Preprocessing functions/augmentations, unless super generic, should be defined in separate libraries, so I'm closing this issue." ]
2021-07-10T00:03:09
2023-07-20T13:18:48
2023-07-20T13:18:47
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.**
Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy), that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP models robust to misspellings, punctuation, emojis, etc. Plus, with Transformers supporting more CV use cases, having augmentation support becomes crucial.

**Describe the solution you'd like**
The biggest difference between augmentations and preprocessing is that preprocessing happens only once, but you are running augmentations once per epoch. AugLy operates on text directly, so this breaks the typical workflow where we would run the tokenizer once, set the format to pt tensors and be ready for the DataLoader.

**Describe alternatives you've considered**
One possible way of implementing these is to make a custom Dataset class where `__getitem__(i)` runs the augmentation and the tokenizer every time, though this would slow training down considerably given we wouldn't even run the tokenizer in batches. A sketch of that alternative follows.
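A minimal sketch of that per-item alternative. The `augment` callable stands in for an AugLy text transform and `tokenizer` for a Hugging Face tokenizer; neither is pinned down by the issue, so treat the names and arguments as assumptions:

```python
import torch
from torch.utils.data import Dataset


class AugmentedTextDataset(Dataset):
    """Runs augmentation and tokenization on every access, i.e. once per epoch per example."""

    def __init__(self, texts, labels, tokenizer, augment):
        self.texts = texts
        self.labels = labels
        self.tokenizer = tokenizer
        self.augment = augment  # assumed callable: str -> str (e.g. an AugLy text transform)

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, i):
        text = self.augment(self.texts[i])
        encoding = self.tokenizer(text, truncation=True, padding="max_length", max_length=128)
        item = {k: torch.tensor(v) for k, v in encoding.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item
```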
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2622/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2622/timeline
null
not_planned
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
740 days, 13:15:38
https://api.github.com/repos/huggingface/datasets/issues/2618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2618/comments
https://api.github.com/repos/huggingface/datasets/issues/2618/events
https://github.com/huggingface/datasets/issues/2618
940,852,640
MDU6SXNzdWU5NDA4NTI2NDA=
2,618
`filelock.py` Error
{ "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/liyucheng09", "id": 27999909, "login": "liyucheng09", "node_id": "MDQ6VXNlcjI3OTk5OTA5", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "repos_url": "https://api.github.com/users/liyucheng09/repos", "site_admin": false, "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "type": "User", "url": "https://api.github.com/users/liyucheng09", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @liyucheng09, thanks for reporting.\r\n\r\nApparently this issue has to do with your environment setup. One question: is your data in an NFS share? Some people have reported this error when using `fcntl` to write to an NFS share... If this is the case, then it might be that your NFS just may not be set up to provide file locks. You should ask your system administrator, or try these commands in the terminal:\r\n```shell\r\nsudo systemctl enable rpc-statd\r\nsudo systemctl start rpc-statd\r\n```", "We now use the `filelock` package for file locking, so please report this issue in their repo if it's still causing you problems." ]
2021-07-09T15:12:49
2024-06-21T06:14:07
2023-11-23T19:06:19
NONE
null
null
null
null
## Describe the bug

It seems that `filelock.py` raised an error.

```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
  File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
OSError: [Errno 37] No locks available
```

According to the error log, it is an OSError, but there is an `except` in the `_acquire` function.

```
def _acquire(self):
    open_mode = os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_TRUNC
    try:
        fd = os.open(self._lock_file, open_mode)
    except (IOError, OSError):
        pass
    else:
        self._lock_file_fd = fd
    return None
```

I don't know why it got stuck rather than `pass`ing directly. I am not quite familiar with filelock operation, so any help is highly appreciated.

## Steps to reproduce the bug

```python
ds = load_dataset('xsum')
```

## Expected results

A clear and concise description of the expected results.

## Actual results

```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
  File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
OSError: [Errno 37] No locks available

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 818, in load_dataset
    use_auth_token=use_auth_token,
  File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 470, in prepare_module
    with FileLock(lock_path):
  File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 323, in __enter__
    self.acquire()
  File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 272, in acquire
    self._acquire()
  File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
KeyboardInterrupt
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-4.15.0-135-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2618/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2618/timeline
null
not_planned
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
867 days, 3:53:30
https://api.github.com/repos/huggingface/datasets/issues/2615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2615/comments
https://api.github.com/repos/huggingface/datasets/issues/2615/events
https://github.com/huggingface/datasets/issues/2615
940,794,339
MDU6SXNzdWU5NDA3OTQzMzk=
2,615
Jsonlines export error
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "Thanks for reporting @TevenLeScao! I'm having a look...", "(not sure what just happened on the assignations sorry)", "For some reason this happens (both `datasets` version are on master) only on Python 3.6 and not Python 3.8.", "@TevenLeScao we are using `pandas` to serialize the dataset to JSON Lines. So it must be due to pandas. Could you please check the pandas version causing the issue?", "@TevenLeScao I have just checked it: this was a bug in `pandas` and it was fixed in version 1.2: https://github.com/pandas-dev/pandas/pull/36898", "Thanks ! I'm creating a PR", "Well I though it was me who has taken on this issue... 😅 ", "Sorry, I was also talking to teven offline so I already had the PR ready before noticing x)", "I was also already working in my PR... Nevermind. Next time we should pay attention if there is somebody (self-)assigned to an issue and if he/she is still working on it before overtaking it... 😄 ", "The fix is available on `master` @TevenLeScao , thanks for reporting" ]
2021-07-09T14:02:05
2021-07-09T15:29:07
2021-07-09T15:28:33
CONTRIBUTOR
null
null
null
null
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default. ## Steps to reproduce the bug This is what I'm running: in python: ``` from datasets import load_dataset ptb = load_dataset("ptb_text_only") ptb["train"].to_json("ptb.jsonl") ``` then out of python: ``` head -10000 ptb.jsonl ``` ## Expected results Properly separated lines ## Actual results The last line is a concatenation of two lines ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.1.dev0 - Platform: Linux-5.4.0-1046-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyArrow version: 4.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2615/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2615/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1:26:28
https://api.github.com/repos/huggingface/datasets/issues/2607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2607/comments
https://api.github.com/repos/huggingface/datasets/issues/2607/events
https://github.com/huggingface/datasets/issues/2607
938,796,902
MDU6SXNzdWU5Mzg3OTY5MDI=
2,607
Streaming local gzip compressed JSON line files is not working
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Updating to pyarrow-4.0.1 didn't fix the issue", "Here is an exemple dataset with 2 of these compressed JSON files: https://huggingface.co/datasets/thomwolf/github-python", "Hi @thomwolf, thanks for reporting.\r\n\r\nIt seems this might be due to the fact that the JSON Dataset builder uses `pyarrow.json` (`paj.read_json`) to read the data without using the Python standard `open(file,...` (which is the one patched with `xopen` to work in streaming mode).\r\n\r\nThis has to be fixed.", "Sorry for reopening this, but I'm having the same issue as @thomwolf when streaming a gzipped JSON Lines file from the hub. Or is that just not possible by definition?\r\nI installed `datasets`in editable mode from source (so probably includes the fix from #2608 ?): \r\n```\r\n>>> datasets.__version__\r\n'1.9.1.dev0'\r\n```\r\n\r\n```\r\n>>> msmarco = datasets.load_dataset(\"webis/msmarco\", \"corpus\", streaming=True)\r\nUsing custom data configuration corpus-174d3b7155eb68db\r\n>>> msmarco_iter = iter(msmarco['train'])\r\n>>> print(next(msmarco_iter))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/media/ssd/TREC/msmarco/datasets/src/datasets/iterable_dataset.py\", line 338, in __iter__\r\n for key, example in self._iter():\r\n File \"/media/ssd/TREC/msmarco/datasets/src/datasets/iterable_dataset.py\", line 335, in _iter\r\n yield from ex_iterable\r\n File \"/media/ssd/TREC/msmarco/datasets/src/datasets/iterable_dataset.py\", line 78, in __iter__\r\n for key, example in self.generate_examples_fn(**self.kwargs):\r\n File \"/home/christopher/.cache/huggingface/modules/datasets_modules/datasets/msmarco/eb63dff8d83107168e973c7a655a6082d37e08d71b4ac39a0afada479c138745/msmarco.py\", line 96, in _generate_examples\r\n with gzip.open(file, \"rt\", encoding=\"utf-8\") as f:\r\n File \"/usr/lib/python3.6/gzip.py\", line 53, in open\r\n binary_file = GzipFile(filename, gz_mode, compresslevel)\r\n File \"/usr/lib/python3.6/gzip.py\", line 163, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nFileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/webis/msmarco/resolve/main/msmarco_doc_00.gz'\r\n```\r\n\r\nLoading the dataset without streaming set to True, works fine.", "Hi ! To make the streaming work, we extend `open` in the dataset builder to work with urls.\r\n\r\nTherefore you just need to use `open` before using `gzip.open`:\r\n```diff\r\n- with gzip.open(file, \"rt\", encoding=\"utf-8\") as f:\r\n+ with gzip.open(open(file, \"rb\"), \"rt\", encoding=\"utf-8\") as f:\r\n```\r\n\r\nYou can see that it is the case for oscar.py and c4.py for example:\r\n\r\nhttps://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/datasets/oscar/oscar.py#L358-L358\r\n\r\nhttps://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/datasets/c4/c4.py#L88-L88\r\n\r\n", "@lhoestq Sorry I missed that. Thank you Quentin!" ]
2021-07-07T11:36:33
2021-07-20T09:50:19
2021-07-08T16:08:41
MEMBER
null
null
null
null
## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset)) ``` ## Actual results ``` FileNotFoundError Traceback (most recent call last) <ipython-input-6-27a664e29784> in <module> ----> 1 next(iter(streamed_dataset)) ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self) 336 337 def __iter__(self): --> 338 for key, example in self._iter(): 339 if self.features: 340 # we encode the example for ClassLabel feature types for example ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in _iter(self) 333 else: 334 ex_iterable = self._ex_iterable --> 335 yield from ex_iterable 336 337 def __iter__(self): ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in wrapper(**kwargs) 282 def wrapper(**kwargs): 283 python_formatter = PythonFormatter() --> 284 for key, table in generate_tables_fn(**kwargs): 285 batch = python_formatter.format_batch(table) 286 for i, example in enumerate(_batch_to_examples(batch)): ~/Documents/GitHub/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files, original_files) 85 file, 86 read_options=self.config.pa_read_options, ---> 87 parse_options=self.config.pa_parse_options, 88 ) 89 except pa.ArrowInvalid as err: ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json._get_reader() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_input_stream() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_native_file() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile.__cinit__() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile._open_readable() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() FileNotFoundError: [Errno 2] Failed to open local file 'gzip://file-000000000000.json::/Users/thomwolf/github-dataset/file-000000000000.json.gz'. Detail: [errno 2] No such file or directory ``` ## Environment info - `datasets` version: 1.9.1.dev0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.7 - PyArrow version: 1.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2607/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2607/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 4:32:08
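The fix described in the comments on the issue above boils down to wrapping the streaming-aware `open` before handing the file object to `gzip`. Below is a minimal sketch of that pattern; it assumes a loading-script context where `open` has been extended by `datasets` to also accept URLs when `streaming=True`, and the file names are placeholders.

```python
# Sketch of the streaming-friendly pattern from the comments above:
# pass `open(file, "rb")` to gzip so the same generator works for local
# paths and, inside a dataset script, for remote URLs in streaming mode.
import gzip
import json


def _generate_examples(files):
    key = 0
    for file in files:  # e.g. ["file-000000000000.json.gz"]
        with gzip.open(open(file, "rb"), "rt", encoding="utf-8") as f:
            for line in f:
                yield key, json.loads(line)
                key += 1
```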
https://api.github.com/repos/huggingface/datasets/issues/2606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2606/comments
https://api.github.com/repos/huggingface/datasets/issues/2606/events
https://github.com/huggingface/datasets/issues/2606
938,763,684
MDU6SXNzdWU5Mzg3NjM2ODQ=
2,606
[Metrics] addition of wiki_split metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "d4c5f9", "default": false, "description": "Requesting to add a new metric", "id": 2459308248, "name": "metric request", "node_id": "MDU6TGFiZWwyNDU5MzA4MjQ4", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani", "user_view_type": "public" } ]
[ "#take" ]
2021-07-07T10:56:04
2021-07-12T22:34:31
2021-07-12T22:34:31
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** While training a model on the sentence split task in English, we need to evaluate the trained model on `Exact Match`, `SARI` and `BLEU` scores like this ![image](https://user-images.githubusercontent.com/26653468/124746876-ff5a3380-df3e-11eb-9a01-4b48db7a6694.png) During training we need a metric that can return all of these outputs. Currently, we don't have an exact match metric for text-normalized data. **Describe the solution you'd like** A custom metric for wiki_split that can calculate these three values and provide them in the form of a single dictionary. For exact match, we can refer to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py) **Describe alternatives you've considered** Two of the metrics are already present; one more can be added for exact match, and then we can run all three metrics in the training script. #self-assign
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2606/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2606/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
5 days, 11:38:27
https://api.github.com/repos/huggingface/datasets/issues/2604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2604/comments
https://api.github.com/repos/huggingface/datasets/issues/2604/events
https://github.com/huggingface/datasets/issues/2604
938,602,237
MDU6SXNzdWU5Mzg2MDIyMzc=
2,604
Add option to delete temporary files (e.g. extracted files) when loading dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi !\r\nIf we want something more general, we could either\r\n1. delete the extracted files after the arrow data generation automatically, or \r\n2. delete each extracted file during the arrow generation right after it has been closed.\r\n\r\nSolution 2 is better to save disk space during the arrow generation. Is it what you had in mind ?\r\n\r\nThe API could look like\r\n```python\r\nload_dataset(..., delete_extracted_files_after_usage=True)\r\n```\r\n\r\nIn terms of implementation, here are some directions we could take for each solution:\r\n1. get the list of the extracted files from the DownloadManager and then delete them after the dataset is processed. This can be implemented in `download_and_prepare` I guess\r\n2. maybe wrap and mock `open` in the builder to make it delete the file when the file is closed.", "Also, if I delete the extracted files they need to be re-extracted again instead of loading from the Arrow cache files", "I think we already opened an issue about this topic (suggested by @stas00): duplicated of #2481?\r\n\r\nThis is in our TODO list... 😅 ", "I think the deletion of each extracted file could be implemented in our CacheManager and ExtractManager (once merged to master: #2295, #2277). 😉 ", "Oh yes sorry, I didn't check if this was a duplicate", "Nevermind @thomwolf, I just mentioned the other issue so that both appear linked in GitHub and we do not forget to close both once we make the corresponding Pull Request... That was the main reason! 😄 ", "Ok yes. I think this is an important feature to be able to use large datasets which are pretty much always compressed files.\r\n\r\nIn particular now this requires to keep the extracted file on the drive if you want to avoid reprocessing the dataset so in my case, this require using always ~400GB of drive instead of just 200GB (which is already significant). \r\n\r\nTwo nice features would be to:\r\n- allow to delete the extracted files without loosing the ability to load the dataset from the cached arrow-file\r\n- streamlined decompression when only the currently read file is extracted - this might require to read the list of files from the extracted archives before processing them?", "Here is a sample dataset with 2 such large compressed JSON files for debugging: https://huggingface.co/datasets/thomwolf/github-python", "Note that I'm confirming that with the current master branch of dataset, deleting extracted files (without deleting the arrow cache file) lead to **re-extracting** these files when reloading the dataset instead of directly loading the arrow cache file.", "Hi ! That's weird, it doesn't do that on my side (tested on master on my laptop by deleting the `extracted` folder in the download cache directory). You tested with one of the files at https://huggingface.co/datasets/thomwolf/github-python that you have locally ?", "Yes it’s when I load local compressed JSON line files with load_dataset(‘json’, data_files=…) ", "@thomwolf I'm sorry but I can't reproduce this problem. 
I'm also using: \r\n```python\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, cache_dir=cache_dir)\r\n```\r\nafter having removed the extracted files:\r\n```python\r\nassert sorted((cache_dir / \"downloads\" / \"extracted\").iterdir()) == []\r\n```\r\n\r\nI get the logging message:\r\n```shell\r\nWARNING datasets.builder:builder.py:531 Reusing dataset json ...\r\n```", "Do you confirm the extracted folder stays empty after reloading?", "> \r\n> \r\n> Do you confirm the extracted folder stays empty after reloading?\r\n\r\nYes, I have the above mentioned assertion on the emptiness of the extracted folder:\r\n```python\r\nassert sorted((cache_dir / \"downloads\" / \"extracted\").iterdir()) == []\r\n```\r\n" ]
2021-07-07T07:56:16
2021-07-19T09:08:18
2021-07-19T09:08:18
MEMBER
null
null
null
null
I'm loading a dataset made up of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180 GB of Arrow cache tables. Having a simple way to delete the extracted files after usage (or even better, to stream extraction/deletion) would be nice to avoid disk clutter. I can maybe tackle this one in the JSON script unless you want a more general solution.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2604/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2604/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
12 days, 1:12:02
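The `delete_extracted_files_after_usage=True` API floated in the comments above was only a proposal at the time; until something like it exists, the extracted copies can be removed by hand once the Arrow cache has been written. The sketch below assumes the default cache location and placeholder file names; whether a later reload re-extracts the archives was exactly what the issue was still debugging.

```python
# Manual workaround sketch (not a datasets API): build the Arrow cache,
# then delete the extracted copies under the download cache to reclaim disk.
import shutil
from pathlib import Path

from datasets import load_dataset

data_files = ["file-000000000000.json.gz"]  # placeholder file names
ds = load_dataset("json", data_files=data_files, split="train")

extracted_dir = Path.home() / ".cache" / "huggingface" / "datasets" / "downloads" / "extracted"
if extracted_dir.exists():
    shutil.rmtree(extracted_dir)  # the Arrow cache files themselves are left in place
```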
https://api.github.com/repos/huggingface/datasets/issues/2600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2600/comments
https://api.github.com/repos/huggingface/datasets/issues/2600/events
https://github.com/huggingface/datasets/issues/2600
938,086,745
MDU6SXNzdWU5MzgwODY3NDU=
2,600
Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded
{ "avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4", "events_url": "https://api.github.com/users/mxschmdt/events{/privacy}", "followers_url": "https://api.github.com/users/mxschmdt/followers", "following_url": "https://api.github.com/users/mxschmdt/following{/other_user}", "gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mxschmdt", "id": 4904985, "login": "mxschmdt", "node_id": "MDQ6VXNlcjQ5MDQ5ODU=", "organizations_url": "https://api.github.com/users/mxschmdt/orgs", "received_events_url": "https://api.github.com/users/mxschmdt/received_events", "repos_url": "https://api.github.com/users/mxschmdt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions", "type": "User", "url": "https://api.github.com/users/mxschmdt", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[]
2021-07-06T16:53:25
2021-07-07T12:50:31
2021-07-07T12:50:31
CONTRIBUTOR
null
null
null
null
## Describe the bug If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes. ## Steps to reproduce the bug ```python from datasets import Dataset data = Dataset.from_dict({'id': [0,1]}) data.filter(lambda x: False, num_proc=2) ``` ## Expected results An empty table should be returned without crashing. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/user/venv/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2143, in filter return self.map( File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1738, in map result = concatenate_datasets(transformed_shards) File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3267, in concatenate_datasets table = concat_tables(tables_to_concat, axis=axis) File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 853, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 713, in from_tables blocks = to_blocks(tables[0]) IndexError: list index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.12.11-300.fc34.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.10 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2600/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2600/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
19:57:06
https://api.github.com/repos/huggingface/datasets/issues/2598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2598/comments
https://api.github.com/repos/huggingface/datasets/issues/2598/events
https://github.com/huggingface/datasets/issues/2598
937,930,632
MDU6SXNzdWU5Mzc5MzA2MzI=
2,598
Unable to download omp dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/25797960?v=4", "events_url": "https://api.github.com/users/erikadistefano/events{/privacy}", "followers_url": "https://api.github.com/users/erikadistefano/followers", "following_url": "https://api.github.com/users/erikadistefano/following{/other_user}", "gists_url": "https://api.github.com/users/erikadistefano/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/erikadistefano", "id": 25797960, "login": "erikadistefano", "node_id": "MDQ6VXNlcjI1Nzk3OTYw", "organizations_url": "https://api.github.com/users/erikadistefano/orgs", "received_events_url": "https://api.github.com/users/erikadistefano/received_events", "repos_url": "https://api.github.com/users/erikadistefano/repos", "site_admin": false, "starred_url": "https://api.github.com/users/erikadistefano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erikadistefano/subscriptions", "type": "User", "url": "https://api.github.com/users/erikadistefano", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @erikadistefano , thanks for reporting the issue.\r\n\r\nI have created a Pull Request that should fix it. \r\n\r\nOnce merged into master, feel free to update your installed `datasets` library (either by installing it from our GitHub master branch or waiting until our next release) to be able to load omp dataset." ]
2021-07-06T14:00:52
2021-07-07T12:56:35
2021-07-07T12:56:35
NONE
null
null
null
null
## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and print the dictionary ## Actual results Downloading and preparing dataset omp/posts_labeled (download: 1.27 MiB, generated: 13.31 MiB, post-processed: Unknown size, total: 14.58 MiB) to /home/erika_distefano/.cache/huggingface/datasets/omp/posts_labeled/1.1.0/2fe5b067be3bff1d4588d5b0cbb9b5b22ae1b9d5b026a8ff572cd389f862735b... 0 examples [00:00, ? examples/s]2021-07-06 09:43:55.868815: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 990, in _prepare_split writer.write(example, key) File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 338, in write self.check_duplicate_keys() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 3326 Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hf_datasets.py", line 32, in <module> omp = load_dataset('omp', 'posts_labeled') File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 992, in _prepare_split num_examples, num_bytes = writer.finalize() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 409, in finalize self.check_duplicate_keys() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 3326 Keys should be unique and deterministic in nature ## Environment info - `datasets` version: 1.8.0 - Platform: Ubuntu 18.04.4 LTS - Python version: 3.6.9 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2598/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2598/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
22:55:43
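The `DuplicatedKeysError` above comes from the requirement that `_generate_examples` yield a unique, deterministic key for every example. The sketch below is not the actual `omp` loading script, just an illustration of the pattern that avoids the error, with hypothetical file handling.

```python
# Illustration of the key-uniqueness rule behind DuplicatedKeysError:
# every (key, example) pair yielded by _generate_examples needs a distinct key.
import json


def _generate_examples(filepath):
    with open(filepath, encoding="utf-8") as f:
        for row_id, line in enumerate(f):
            example = json.loads(line)
            # Yield the running index (or a composite key) rather than reusing a
            # field from the data that may repeat across rows.
            yield row_id, example
```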
https://api.github.com/repos/huggingface/datasets/issues/2596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2596/comments
https://api.github.com/repos/huggingface/datasets/issues/2596/events
https://github.com/huggingface/datasets/issues/2596
937,598,914
MDU6SXNzdWU5Mzc1OTg5MTQ=
2,596
Transformer Class on dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4", "events_url": "https://api.github.com/users/arita37/events{/privacy}", "followers_url": "https://api.github.com/users/arita37/followers", "following_url": "https://api.github.com/users/arita37/following{/other_user}", "gists_url": "https://api.github.com/users/arita37/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arita37", "id": 18707623, "login": "arita37", "node_id": "MDQ6VXNlcjE4NzA3NjIz", "organizations_url": "https://api.github.com/users/arita37/orgs", "received_events_url": "https://api.github.com/users/arita37/received_events", "repos_url": "https://api.github.com/users/arita37/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arita37/subscriptions", "type": "User", "url": "https://api.github.com/users/arita37", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Hi ! Do you have an example in mind that shows how this could be useful ?", "Example:\n\nMerge 2 datasets into one datasets\n\nLabel extraction from dataset\n\ndataset(text, label)\n —> dataset(text, newlabel)\n\nTextCleaning.\n\n\nFor image dataset, \nTransformation are easier (ie linear algebra).\n\n\n\n\n\n\n> On Jul 6, 2021, at 17:39, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> Hi ! Do you have an example in mind that shows how this could be useful ?\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "There are already a few transformations that you can apply on a dataset using methods like `dataset.map()`.\r\nYou can find examples in the documentation here:\r\nhttps://huggingface.co/docs/datasets/processing.html\r\n\r\nYou can merge two datasets with `concatenate_datasets()` or do label extraction with `dataset.map()` for example", "Ok, sure.\n\nThanks for pointing on functional part.\nMy question is more\n“Philosophical”/Design perspective.\n\nThere are 2 perspetive:\n Add transformation methods to \n Dataset Class\n\n\n OR Create a Transformer Class\n which operates on Dataset Class.\n\nT(Dataset) —> Dataset\n\ndatasetnew = MyTransform.transform(dataset)\ndatasetNew.save(path)\n\n\nWhat would be the difficulty\nof implementing a Transformer Class\noperating at dataset level ?\n\n\nthanks\n\n\n\n\n\n\n\n\n\n> On Jul 6, 2021, at 22:00, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> There are already a few transformations that you can apply on a dataset using methods like dataset.map().\n> You can find examples in the documentation here:\n> https://huggingface.co/docs/datasets/processing.html\n> \n> You can merge two datasets with concatenate_datasets() or do label extraction with dataset.map() for example\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.\r\n\r\nI guess if you find any transform that could be useful for text dataset processing, image dataset processing etc. we could definitely start having such transforms :)", "Thanks for reply.\n\nWhat would be the constraints\nto have\nDataset —> Dataset consistency ?\n\nMain issue would be\nlarger than memory dataset and\nserialization on disk.\n\nTechnically,\none still process at atomic level\nand try to wrap the full results\ninto Dataset…. (!)\n\nWhat would you think ?\n\n\n\n\n\n\n\n\n> On Jul 7, 2021, at 16:51, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.\n> \n> I guess if you find any transform that could be useful for text dataset processing, image dataset processing etc. we could definitely start having such transforms :)\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "We can be pretty flexible and not impose any constraints for transforms.\r\n\r\nMoreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. 
Even processing functions like `map` work in a batched fashion to not fill up your RAM. So this shouldn't be an issue", "Ok thanks.\n\nBut, Dataset has various flavors.\nIn current design of Dataset,\n how the serialization on disk is done (?)\n\n\nThe main issue is serialization \nof newdataset= Transform(Dataset)\n (ie thats why am referring to Out Of memory dataset…):\n\n Should be part of Transform or part of dataset ?\n\n\n\n\nMaybe, not, since the output is aimed to feed model in memory (?)\n\n\n\n\n\n\n\n\n> On Jul 7, 2021, at 18:04, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> We can be pretty flexible and not impose any constraints for transforms.\n> \n> Moreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. Even processing functions like map work in a batched fashion to not fill up your RAM. So this shouldn't be an issue\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "I'm not sure I understand, could you elaborate a bit more please ?\r\n\r\nEach dataset is a wrapper of a PyArrow Table that contains all the data. The table is loaded from an arrow file on the disk.\r\nWe have an ArrowWriter and ArrowReader class to write/read arrow tables on disk or in in-memory buffers." ]
2021-07-06T07:27:15
2022-11-02T14:26:09
2022-11-02T14:26:09
NONE
null
null
null
null
Just wondering if you intend to create a Transformer class: dataset --> dataset, for deterministic transformations (i.e. not fit).
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2596/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2596/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
484 days, 6:58:54
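The comments above point to existing primitives like `map` and `concatenate_datasets` as the building blocks for such Dataset -> Dataset transforms. A small sketch of that design follows; the `UppercaseText` class and its `transform` method are hypothetical, not part of the `datasets` library.

```python
# Sketch of a deterministic Dataset -> Dataset transform built on `map`.
from datasets import Dataset


class UppercaseText:
    """Deterministic transform: no fitting, just Dataset in, Dataset out."""

    def transform(self, dataset: Dataset) -> Dataset:
        return dataset.map(lambda example: {"text": example["text"].upper()})


data = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
new_data = UppercaseText().transform(data)
new_data.save_to_disk("transformed_dataset")  # serialization stays on the Dataset side
```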
https://api.github.com/repos/huggingface/datasets/issues/2595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2595/comments
https://api.github.com/repos/huggingface/datasets/issues/2595/events
https://github.com/huggingface/datasets/issues/2595
937,483,120
MDU6SXNzdWU5Mzc0ODMxMjA=
2,595
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/41314912?v=4", "events_url": "https://api.github.com/users/profsatwinder/events{/privacy}", "followers_url": "https://api.github.com/users/profsatwinder/followers", "following_url": "https://api.github.com/users/profsatwinder/following{/other_user}", "gists_url": "https://api.github.com/users/profsatwinder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/profsatwinder", "id": 41314912, "login": "profsatwinder", "node_id": "MDQ6VXNlcjQxMzE0OTEy", "organizations_url": "https://api.github.com/users/profsatwinder/orgs", "received_events_url": "https://api.github.com/users/profsatwinder/received_events", "repos_url": "https://api.github.com/users/profsatwinder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/profsatwinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/profsatwinder/subscriptions", "type": "User", "url": "https://api.github.com/users/profsatwinder", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @profsatwinder.\r\n\r\nIt looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists.", "@albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again. " ]
2021-07-06T03:20:55
2021-07-06T05:59:49
2021-07-06T05:59:49
NONE
null
null
null
null
Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 from datasets import load_dataset, load_metric 2 ----> 3 common_voice_train = load_dataset("common_voice", "pa-IN", split="train+validation") 4 common_voice_test = load_dataset("common_voice", "pa-IN", split="test") 9 frames /root/.cache/huggingface/modules/datasets_modules/datasets/common_voice/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b/common_voice.py in <module>() 19 20 import datasets ---> 21 from datasets.tasks import AutomaticSpeechRecognition 22 23 ModuleNotFoundError: No module named 'datasets.tasks'
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2595/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2595/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2:38:54
https://api.github.com/repos/huggingface/datasets/issues/2591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2591/comments
https://api.github.com/repos/huggingface/datasets/issues/2591/events
https://github.com/huggingface/datasets/issues/2591
936,957,975
MDU6SXNzdWU5MzY5NTc5NzU=
2,591
Cached dataset overflowing disk space
{ "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BirgerMoell", "id": 1704131, "login": "BirgerMoell", "node_id": "MDQ6VXNlcjE3MDQxMzE=", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "type": "User", "url": "https://api.github.com/users/BirgerMoell", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Hi! I'm transferring this issue over to `datasets`", "I'm using the datasets concatenate dataset to combine the datasets and then train.\r\ntrain_dataset = concatenate_datasets([dataset1, dataset2, common_voice_train])\r\n\r\n", "Hi @BirgerMoell.\r\n\r\nYou have several options:\r\n- to set caching to be stored on a different path location, other than the default one (`~/.cache/huggingface/datasets`):\r\n - either setting the environment variable `HF_DATASETS_CACHE` with the path to the new cache location\r\n - or by passing it with the parameter `cache_dir` when loading each of the datasets: `dataset = load_dataset(..., cache_dir=your_new_location)`\r\n\r\n You can get all the information in the docs: https://huggingface.co/docs/datasets/loading_datasets.html#cache-directory\r\n- I wouldn't recommend disabling caching, because current implementation generates cache files anyway, although in a temporary directory and they are deleted when the session closes. See details here: https://huggingface.co/docs/datasets/processing.html#enable-or-disable-caching\r\n- You could alternatively load the datasets in streaming mode. This is a new feature which allows loading the datasets without downloading the entire files. More information here: https://huggingface.co/docs/datasets/dataset_streaming.html", "Hi @BirgerMoell,\r\n\r\nWe are planning to add a new feature to datasets, which could be interesting in your case: Add the option to delete temporary files (decompressed files) from the cache directory (see: #2481, #2604).\r\n\r\nWe will ping you once this feature is implemented, so that the size of your cache directory will be considerably reduced." ]
2021-07-05T10:43:19
2021-07-19T09:08:19
2021-07-19T09:08:19
CONTRIBUTOR
null
null
null
null
I'm training a Swedish Wav2vec2 model on a Linux GPU, and the Hugging Face cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 GB). The cache folder is 500 GB, and now my disk space is full. Is there a way to toggle caching, or to store the cache on a different device? (I have another drive with 4 TB that could hold the caching files.) This might not technically be a bug, but I was unsure and felt that the bug label was the closest fit.

Traceback (most recent call last):
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 186, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/fingerprint.py", line 397, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1983, in _map_single
    writer.finalize()
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_writer.py", line 418, in finalize
    self.pa_writer.close()
  File "pyarrow/ipc.pxi", line 402, in pyarrow.lib._CRecordBatchWriter.close
  File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status
OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device
"""

The above exception was the direct cause of the following exception:
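A minimal sketch of the relocation workaround described in the comments above. The cache path `/mnt/bigdrive/hf_cache` and the `"sv-SE"` Common Voice config are hypothetical stand-ins for the reporter's setup; only `HF_DATASETS_CACHE` and the `cache_dir` argument come from the discussion itself:

```python
import os

# The environment variable must be set before `datasets` is imported,
# because the default cache location is resolved at import time.
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdrive/hf_cache"  # hypothetical path

from datasets import load_dataset

# Alternatively, pass the location explicitly for a single dataset.
dataset = load_dataset(
    "common_voice", "sv-SE", split="train+validation",  # hypothetical config
    cache_dir="/mnt/bigdrive/hf_cache",
)
```

Either option keeps the large Arrow cache files off the small system drive without disabling caching altogether.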
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2591/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2591/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
13 days, 22:25:00
https://api.github.com/repos/huggingface/datasets/issues/2585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2585/comments
https://api.github.com/repos/huggingface/datasets/issues/2585/events
https://github.com/huggingface/datasets/issues/2585
936,484,419
MDU6SXNzdWU5MzY0ODQ0MTk=
2,585
squad_v2 dataset contains misalignment between the answer text and the context value at the answer index
{ "avatar_url": "https://avatars.githubusercontent.com/u/9354454?v=4", "events_url": "https://api.github.com/users/mmajurski/events{/privacy}", "followers_url": "https://api.github.com/users/mmajurski/followers", "following_url": "https://api.github.com/users/mmajurski/following{/other_user}", "gists_url": "https://api.github.com/users/mmajurski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mmajurski", "id": 9354454, "login": "mmajurski", "node_id": "MDQ6VXNlcjkzNTQ0NTQ=", "organizations_url": "https://api.github.com/users/mmajurski/orgs", "received_events_url": "https://api.github.com/users/mmajurski/received_events", "repos_url": "https://api.github.com/users/mmajurski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mmajurski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmajurski/subscriptions", "type": "User", "url": "https://api.github.com/users/mmajurski", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @mmajurski, thanks for reporting this issue.\r\n\r\nIndeed this misalignment arises because the source dataset context field contains leading blank spaces (and these are counted within the answer_start), while our datasets loading script removes these leading blank spaces.\r\n\r\nI'm going to fix our script so that all leading blank spaces in the source dataset are kept, and there is no misalignment between the answer text and the answer_start within the context.", "If you are going to be altering the data cleaning from the source Squad dataset, here is one thing to consider.\r\nThere are occasional double spaces separating words which it might be nice to get rid of. \r\n\r\nEither way, thank you." ]
2021-07-04T15:39:49
2021-07-07T13:18:51
2021-07-07T13:18:51
NONE
null
null
null
null
## Describe the bug
The built-in Hugging Face squad_v2 dataset that you can access via datasets.load_dataset contains misalignment between answers['text'] and the characters in the context at the location specified by answers['answer_start'].

For example:
id = '56d1f453e7d4791d009025bd'
answers = {'text': ['Pure Land'], 'answer_start': [146]}

However, the actual text in the context at location 146 is 'ure Land,', which is an off-by-one error from the correct answer.

## Steps to reproduce the bug
```python
import datasets


def check_context_answer_alignment(example):
    for a_idx in range(len(example['answers']['text'])):
        # check raw dataset for answer consistency between context and answer
        answer_text = example['answers']['text'][a_idx]
        a_st_idx = example['answers']['answer_start'][a_idx]
        a_end_idx = a_st_idx + len(example['answers']['text'][a_idx])
        answer_text_from_context = example['context'][a_st_idx:a_end_idx]
        if answer_text != answer_text_from_context:
            # print(example['id'])
            return False
    return True


dataset = datasets.load_dataset('squad_v2', split='train', keep_in_memory=True)
start_len = len(dataset)
dataset = dataset.filter(check_context_answer_alignment, num_proc=1, keep_in_memory=True)
end_len = len(dataset)
print('{} instances contain mis-alignment between the answer text and answer index.'.format(start_len - end_len))
```

## Expected results
This code should result in 0 rows being filtered out from the dataset.

## Actual results
This filter command results in 258 rows being flagged as containing a discrepancy between the text contained within answers['text'] and the text in example['context'] at the answers['answer_start'] location. This code will reproduce the problem and produce the following count:

"258 instances contain mis-alignment between the answer text and answer index."
## Environment info Steps to rebuilt the Conda environment: ``` # create a virtual environment to stuff all these packages into conda create -n round8 python=3.8 -y # activate the virtual environment conda activate round8 # install pytorch (best done through conda to handle cuda dependencies) conda install pytorch torchvision torchtext cudatoolkit=11.1 -c pytorch-lts -c nvidia pip install jsonpickle transformers datasets matplotlib ``` OS: Ubuntu 20.04 Python 3.8 Result of `conda env export`: ``` name: round8 channels: - pytorch-lts - nvidia - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=4.5=1_gnu - blas=1.0=mkl - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - ca-certificates=2021.5.25=h06a4308_1 - certifi=2021.5.30=py38h06a4308_0 - cffi=1.14.5=py38h261ae71_0 - chardet=4.0.0=py38h06a4308_1003 - cryptography=3.4.7=py38hd23ed53_0 - cudatoolkit=11.1.74=h6bb024c_0 - ffmpeg=4.2.2=h20bf706_0 - freetype=2.10.4=h5ab3b9f_0 - gmp=6.2.1=h2531618_2 - gnutls=3.6.15=he1e5248_0 - idna=2.10=pyhd3eb1b0_0 - intel-openmp=2021.2.0=h06a4308_610 - jpeg=9b=h024ee3a_2 - lame=3.100=h7b6447c_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.35.1=h7274673_9 - libffi=3.3=he6710b0_2 - libgcc-ng=9.3.0=h5101ec6_17 - libgomp=9.3.0=h5101ec6_17 - libidn2=2.3.1=h27cfd23_0 - libopus=1.3.1=h7b6447c_0 - libpng=1.6.37=hbc83047_0 - libstdcxx-ng=9.3.0=hd4cf53a_17 - libtasn1=4.16.0=h27cfd23_0 - libtiff=4.2.0=h85742a9_0 - libunistring=0.9.10=h27cfd23_0 - libuv=1.40.0=h7b6447c_0 - libvpx=1.7.0=h439df22_0 - libwebp-base=1.2.0=h27cfd23_0 - lz4-c=1.9.3=h2531618_0 - mkl=2021.2.0=h06a4308_296 - mkl-service=2.3.0=py38h27cfd23_1 - mkl_fft=1.3.0=py38h42c9631_2 - mkl_random=1.2.1=py38ha9443f7_2 - ncurses=6.2=he6710b0_1 - nettle=3.7.3=hbbd107a_1 - ninja=1.10.2=hff7bd54_1 - numpy=1.20.2=py38h2d18471_0 - numpy-base=1.20.2=py38hfae3a4d_0 - olefile=0.46=py_0 - openh264=2.1.0=hd408876_0 - openssl=1.1.1k=h27cfd23_0 - pillow=8.2.0=py38he98fc37_0 - pip=21.1.2=py38h06a4308_0 - pycparser=2.20=py_2 - pyopenssl=20.0.1=pyhd3eb1b0_1 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.10=h12debd9_8 - pytorch=1.8.1=py3.8_cuda11.1_cudnn8.0.5_0 - readline=8.1=h27cfd23_0 - requests=2.25.1=pyhd3eb1b0_0 - setuptools=52.0.0=py38h06a4308_0 - six=1.16.0=pyhd3eb1b0_0 - sqlite=3.35.4=hdfb4753_0 - tk=8.6.10=hbc83047_0 - torchtext=0.9.1=py38 - torchvision=0.9.1=py38_cu111 - typing_extensions=3.7.4.3=pyha847dfd_0 - urllib3=1.26.4=pyhd3eb1b0_0 - wheel=0.36.2=pyhd3eb1b0_0 - x264=1!157.20191217=h7b6447c_0 - xz=5.2.5=h7b6447c_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.9=haebb681_0 - pip: - click==8.0.1 - cycler==0.10.0 - datasets==1.8.0 - dill==0.3.4 - filelock==3.0.12 - fsspec==2021.6.0 - huggingface-hub==0.0.8 - joblib==1.0.1 - jsonpickle==2.0.0 - kiwisolver==1.3.1 - matplotlib==3.4.2 - multiprocess==0.70.12.2 - packaging==20.9 - pandas==1.2.4 - pyarrow==3.0.0 - pyparsing==2.4.7 - python-dateutil==2.8.1 - pytz==2021.1 - regex==2021.4.4 - sacremoses==0.0.45 - tokenizers==0.10.3 - tqdm==4.49.0 - transformers==4.6.1 - xxhash==2.0.2 prefix: /home/mmajurski/anaconda3/envs/round8 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2585/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2585/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 21:39:02
https://api.github.com/repos/huggingface/datasets/issues/2583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2583/comments
https://api.github.com/repos/huggingface/datasets/issues/2583/events
https://github.com/huggingface/datasets/issues/2583
936,034,976
MDU6SXNzdWU5MzYwMzQ5NzY=
2,583
Error iteration over IterableDataset using Torch DataLoader
{ "avatar_url": "https://avatars.githubusercontent.com/u/12227436?v=4", "events_url": "https://api.github.com/users/LeenaShekhar/events{/privacy}", "followers_url": "https://api.github.com/users/LeenaShekhar/followers", "following_url": "https://api.github.com/users/LeenaShekhar/following{/other_user}", "gists_url": "https://api.github.com/users/LeenaShekhar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LeenaShekhar", "id": 12227436, "login": "LeenaShekhar", "node_id": "MDQ6VXNlcjEyMjI3NDM2", "organizations_url": "https://api.github.com/users/LeenaShekhar/orgs", "received_events_url": "https://api.github.com/users/LeenaShekhar/received_events", "repos_url": "https://api.github.com/users/LeenaShekhar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LeenaShekhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeenaShekhar/subscriptions", "type": "User", "url": "https://api.github.com/users/LeenaShekhar", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! This is because you first need to format the dataset for pytorch:\r\n\r\n```python\r\n>>> import torch\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n>>> torch_iterable_dataset = dataset.with_format(\"torch\")\r\n>>> assert isinstance(torch_iterable_dataset, torch.utils.data.IterableDataset)\r\n>>> dataloader = torch.utils.data.DataLoader(torch_iterable_dataset, batch_size=4)\r\n>>> next(iter(dataloader))\r\n{'id': tensor([0, 1, 2, 3]), 'text': ['Mtendere Village was inspired...]}\r\n```\r\n\r\nThis is because the pytorch dataloader expects a subclass of `torch.utils.data.IterableDataset`. Since you can't pass an arbitrary iterable to a pytorch dataloader, you first need to build an object that inherits from `torch.utils.data.IterableDataset` using `with_format(\"torch\")` for example.\r\n", "Thank you for that and the example! \r\n\r\nWhat you said makes total sense; I just somehow missed that and assumed HF IterableDataset was a subclass of Torch IterableDataset. " ]
2021-07-02T19:55:58
2021-07-20T09:04:45
2021-07-05T23:48:23
NONE
null
null
null
null
## Describe the bug
I have an IterableDataset (created using streaming=True) and I am trying to create batches with the Torch DataLoader class by passing this IterableDataset to it. This throws the error pasted below. I can do the same with a Torch IterableDataset. One thing I noticed is that in the former case, when I look at the dataloader.sampler class I get torch.utils.data.sampler.SequentialSampler, while the latter gives torch.utils.data.dataloader._InfiniteConstantSampler. I am not sure if this is how it is meant to be used, but that's what seemed reasonable to me.

## Steps to reproduce the bug
1. Does not work.
```python
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
>>> dataloader = torch.utils.data.DataLoader(dataset, batch_size=4)
>>> dataloader.sampler
<torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208>
>>> for batch in dataloader:
...     print(batch)
```
2. Works.
```python
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader

class CustomIterableDataset(IterableDataset):
    'Characterizes a dataset for PyTorch'
    def __init__(self, data):
        'Initialization'
        self.data = data

    def __iter__(self):
        return iter(self.data)

data = list(range(12))
dataset = CustomIterableDataset(data)
dataloader = DataLoader(dataset, batch_size=4)
print("dataloader: ", dataloader.sampler)
for batch in dataloader:
    print(batch)
```

## Expected results
To get batches of data with the batch size as 4. Output from the latter case (2), though the data source is different there so the actual data is different:

dataloader: <torch.utils.data.dataloader._InfiniteConstantSampler object at 0x7f1cc29e2c50>
tensor([0, 1, 2, 3])
tensor([4, 5, 6, 7])
tensor([ 8, 9, 10, 11])

## Actual results
<torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208>
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 474, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
    for idx in self.sampler:
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 67, in __iter__
    return iter(range(len(self.data_source)))
TypeError: object of type 'IterableDataset' has no len()

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: '1.8.1.dev0'
- Platform: Linux
- Python version: Python 3.6.8
- PyArrow version: '3.0.0'
{ "avatar_url": "https://avatars.githubusercontent.com/u/12227436?v=4", "events_url": "https://api.github.com/users/LeenaShekhar/events{/privacy}", "followers_url": "https://api.github.com/users/LeenaShekhar/followers", "following_url": "https://api.github.com/users/LeenaShekhar/following{/other_user}", "gists_url": "https://api.github.com/users/LeenaShekhar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LeenaShekhar", "id": 12227436, "login": "LeenaShekhar", "node_id": "MDQ6VXNlcjEyMjI3NDM2", "organizations_url": "https://api.github.com/users/LeenaShekhar/orgs", "received_events_url": "https://api.github.com/users/LeenaShekhar/received_events", "repos_url": "https://api.github.com/users/LeenaShekhar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LeenaShekhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeenaShekhar/subscriptions", "type": "User", "url": "https://api.github.com/users/LeenaShekhar", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2583/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2583/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3 days, 3:52:25
https://api.github.com/repos/huggingface/datasets/issues/2573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2573/comments
https://api.github.com/repos/huggingface/datasets/issues/2573/events
https://github.com/huggingface/datasets/issues/2573
934,584,745
MDU6SXNzdWU5MzQ1ODQ3NDU=
2,573
Finding right block-size with JSON loading difficult for user
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "This was actually a second error arising from a too small block-size in the json reader.\r\n\r\nFinding the right block size is difficult for the layman user" ]
2021-07-01T08:48:35
2021-07-01T19:10:53
null
MEMBER
null
null
null
null
As reported by @thomwolf, while loading a JSON Lines file with the "json" loading script, he gets:

> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
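A minimal sketch of the manual tuning a user currently has to do. The `block_size` keyword and the 10 MiB value are assumptions (the argument is expected to be forwarded to pyarrow's JSON reader by the "json" builder), and the data file path is hypothetical:

```python
from datasets import load_dataset

# Sketch: give the pyarrow JSON reader larger parsing blocks so that very long
# lines fit into a single block. Both the kwarg name and the value here are
# assumptions the user has to discover and tune by hand, which is the pain point.
dataset = load_dataset(
    "json",
    data_files="my_file.jsonl",  # hypothetical path
    block_size=10 << 20,         # 10 MiB instead of the default
)
```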
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2573/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2573/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2572/comments
https://api.github.com/repos/huggingface/datasets/issues/2572/events
https://github.com/huggingface/datasets/issues/2572
934,573,767
MDU6SXNzdWU5MzQ1NzM3Njc=
2,572
Support Zstandard compressed files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "I am trying to load a dataset using Hugging Face Datasets load_dataset method. I am getting the value error as show below. Can someone help with this? I am using Windows laptop and Google Colab notebook.\r\n\r\n```\r\n!pip install zstandard\r\nfrom datasets import load_dataset\r\n\r\nlds = load_dataset(\r\n \"json\",\r\n data_files=\"https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst\",\r\n split=\"train\",\r\n streaming=True,\r\n)\r\n\r\nWARNING:datasets.builder:Using custom data configuration default-a1d9e8eaedd958cd\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-12-5b4fdcb8e6d5>](https://localhost:8080/#) in <module>\r\n 6 )\r\n 7 \r\n----> 8 next(iter(law_dataset_streamed))\r\n\r\n17 frames\r\n[/usr/local/lib/python3.8/dist-packages/fsspec/core.py](https://localhost:8080/#) in get_compression(urlpath, compression)\r\n 485 compression = infer_compression(urlpath)\r\n 486 if compression is not None and compression not in compr:\r\n--> 487 raise ValueError(\"Compression type %s not supported\" % compression)\r\n 488 return compression\r\n 489 \r\n\r\nValueError: Compression type zstd not supported\r\n```", "I just tried on google colab and this works:\r\n```python\r\n!pip install zstandard\r\n!pip install datasets\r\nfrom datasets import load_dataset\r\n\r\nlds = load_dataset(\r\n \"json\",\r\n data_files=\"https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst\",\r\n split=\"train\",\r\n streaming=True,\r\n)\r\nnext(iter(lds))\r\n```\r\n\r\nCan you check that you have a correct installation of `zstandard` ?", "@lhoestq please note [this](https://github.com/huggingface/datasets/issues/2572#issuecomment-1363718916) is a duplicate of:\r\n- #5388", "Oh thanks I missed that one !", "> I just tried on google colab and this works:\r\n> \r\n> ```python\r\n> !pip install zstandard\r\n> !pip install datasets\r\n> from datasets import load_dataset\r\n> \r\n> lds = load_dataset(\r\n> \"json\",\r\n> data_files=\"https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst\",\r\n> split=\"train\",\r\n> streaming=True,\r\n> )\r\n> next(iter(lds))\r\n> ```\r\n> \r\n> Can you check that you have a correct installation of `zstandard` ?\r\n\r\nI was downloading datasets first then was doing zstandard installation and that was causing the issue. This was highlighted by the Hugging Face staff and that helped. Now the issue is resolved. Thank you." ]
2021-07-01T08:37:04
2023-01-03T15:34:01
2021-07-05T10:50:27
MEMBER
null
null
null
null
Add support for Zstandard compressed files: https://facebook.github.io/zstd/
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2572/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2572/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4 days, 2:13:23
https://api.github.com/repos/huggingface/datasets/issues/2569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2569/comments
https://api.github.com/repos/huggingface/datasets/issues/2569/events
https://github.com/huggingface/datasets/issues/2569
933,015,797
MDU6SXNzdWU5MzMwMTU3OTc=
2,569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
{ "avatar_url": "https://avatars.githubusercontent.com/u/2980993?v=4", "events_url": "https://api.github.com/users/suzyahyah/events{/privacy}", "followers_url": "https://api.github.com/users/suzyahyah/followers", "following_url": "https://api.github.com/users/suzyahyah/following{/other_user}", "gists_url": "https://api.github.com/users/suzyahyah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/suzyahyah", "id": 2980993, "login": "suzyahyah", "node_id": "MDQ6VXNlcjI5ODA5OTM=", "organizations_url": "https://api.github.com/users/suzyahyah/orgs", "received_events_url": "https://api.github.com/users/suzyahyah/received_events", "repos_url": "https://api.github.com/users/suzyahyah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/suzyahyah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suzyahyah/subscriptions", "type": "User", "url": "https://api.github.com/users/suzyahyah", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @suzyahyah, thanks for reporting.\r\n\r\nThe message you get is indeed not an error message, but a warning coming from Hugging Face `transformers`. The complete warning message is:\r\n```\r\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.layer_norm.weight']\r\n- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nIn this case, this behavior IS expected and you can safely ignore the warning message.\r\n\r\nThe reason is that you are just using RoBERTa to get the contextual embeddings of the input sentences/tokens, thus leaving away its head layer, whose weights are ignored.\r\n\r\nFeel free to reopen this issue if you need further explanations.", "Hi @suzyahyah, I have created a Pull Request to filter out that warning message in this specific case, since the behavior is as expected and the warning message can only cause confusion for users (as in your case)." ]
2021-06-29T18:55:23
2021-07-01T07:08:59
2021-06-30T07:35:49
NONE
null
null
null
null
When applying bertscore out of the box, I get the warning:

```
Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']
```

Following the typical usage from https://huggingface.co/docs/datasets/loading_metrics.html:

```python
from datasets import load_metric

metric = load_metric('bertscore')

# Example of typical usage
for batch in dataset:
    inputs, references = batch
    predictions = model(inputs)
    metric.add_batch(predictions=predictions, references=references)
score = metric.compute(lang="en")
# score = metric.compute(model_type="roberta-large")  # gives the same error
```

I am concerned about this because my usage shouldn't require any further fine-tuning, and most people would expect to use BertScore out of the box. I realise the Hugging Face code is a wrapper around https://github.com/Tiiiger/bert_score, but I think that repo is relying on the model code and weights from the Hugging Face repo anyway.

## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.0-1041-aws-x86_64-with-glibc2.27
- Python version: 3.9.5
- PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2569/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2569/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
12:40:26
https://api.github.com/repos/huggingface/datasets/issues/2564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2564/comments
https://api.github.com/repos/huggingface/datasets/issues/2564/events
https://github.com/huggingface/datasets/issues/2564
932,389,639
MDU6SXNzdWU5MzIzODk2Mzk=
2,564
concatenate_datasets for iterable datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "It is probably worth noting here that the [documentation](https://huggingface.co/docs/datasets/process#concatenate) is misleading (indicating that it does work for IterableDatasets):\r\n\r\n> You can also mix several datasets together by taking alternating examples from each one to create a new dataset. This is known as interleaving, and you can use it with [interleave_datasets()](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.interleave_datasets). **Both [interleave_datasets()](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.interleave_datasets) and [concatenate_datasets()](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.concatenate_datasets) will work with regular [Dataset](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.Dataset) and [IterableDataset](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.IterableDataset) objects**. Refer to the [Stream](https://huggingface.co/docs/datasets/stream#interleave) section for an example of how it’s used. ", "Thanks for the heads up, I'll fix that" ]
2021-06-29T08:59:41
2022-06-28T21:15:04
2022-06-28T21:15:04
MEMBER
null
null
null
null
Currently `concatenate_datasets` only works for map-style `Dataset`. It would be nice to have it work for `IterableDataset` objects as well. It would simply chain the iterables of the iterable datasets.
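A minimal sketch of the proposed behaviour, using plain Python generators rather than the datasets API; the helper name `concatenate_iterables` is made up for illustration only:

```python
from itertools import chain

# Sketch: concatenating iterable datasets amounts to exhausting the first
# iterable, then the second, and so on, preserving order within each one.
def concatenate_iterables(*iterables):
    yield from chain(*iterables)

ds1 = ({"id": i, "source": "a"} for i in range(3))
ds2 = ({"id": i, "source": "b"} for i in range(3))
print(list(concatenate_iterables(ds1, ds2)))
# all three "a" examples first, then all three "b" examples
```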
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 5, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/2564/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2564/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
364 days, 12:15:23
https://api.github.com/repos/huggingface/datasets/issues/2563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2563/comments
https://api.github.com/repos/huggingface/datasets/issues/2563/events
https://github.com/huggingface/datasets/issues/2563
932,387,639
MDU6SXNzdWU5MzIzODc2Mzk=
2,563
interleave_datasets for map-style datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[]
2021-06-29T08:57:24
2021-07-01T09:33:33
2021-07-01T09:33:33
MEMBER
null
null
null
null
Currently the `interleave_datasets` function only works for `IterableDataset`. Let's make it work for map-style `Dataset` objects as well. It would work the same way: either alternate between the datasets in order, or sample from them randomly given probabilities specified by the user.
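A minimal sketch of the two interleaving strategies described above, with plain Python lists standing in for map-style datasets; `interleave` here is a made-up helper, not the datasets API:

```python
import random

def interleave(datasets, probabilities=None, seed=0):
    iterators = [iter(d) for d in datasets]
    rng = random.Random(seed)
    step = 0
    while True:
        if probabilities is None:
            idx = step % len(iterators)  # strict round-robin
        else:
            idx = rng.choices(range(len(iterators)), weights=probabilities)[0]
        try:
            yield next(iterators[idx])
        except StopIteration:
            return  # stop as soon as any dataset is exhausted
        step += 1

print(list(interleave([[0, 1, 2], ["a", "b", "c"]])))  # [0, 'a', 1, 'b', 2, 'c']
```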
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2563/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2563/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 0:36:09
https://api.github.com/repos/huggingface/datasets/issues/2561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2561/comments
https://api.github.com/repos/huggingface/datasets/issues/2561/events
https://github.com/huggingface/datasets/issues/2561
932,321,725
MDU6SXNzdWU5MzIzMjE3MjU=
2,561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apsdehal", "id": 3616806, "login": "apsdehal", "node_id": "MDQ6VXNlcjM2MTY4MDY=", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "repos_url": "https://api.github.com/users/apsdehal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "type": "User", "url": "https://api.github.com/users/apsdehal", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! I just tried to reproduce what you said:\r\n- create a local builder class\r\n- use `load_dataset`\r\n- update the builder class code\r\n- use `load_dataset` again (with or without `ignore_verifications=True`)\r\nAnd it creates a new cache, as expected.\r\n\r\nWhat modifications did you do to your builder's code ?", "Hi @lhoestq. Thanks for your reply. I just did minor modifications for which it should not regenerate cache (for e.g. Adding a print statement). Overall, regardless of cache miss, there should be an explicit option to allow reuse of existing cache if author knows cache shouldn't be affected.", "The cache is based on the hash of the dataset builder's code, so changing the code makes it recompute the cache.\r\n\r\nYou could still rename the cache directory of your previous computation to the new expected cache directory if you want to avoid having to recompute it and if you're sure that it would generate the exact same result.\r\n\r\nThe verifications are data integrity verifications: it checks the checksums of the downloaded files, as well as the size of the generated splits.", "Hi @apsdehal,\r\n\r\nIf you decide to follow @lhoestq's suggestion to rename the cache directory of your previous computation to the new expected cache directory, you can do the following to get the name of the new expected cache directory once #2500 is merged:\r\n```python\r\nfrom datasets import load_dataset_builder\r\ndataset_builder = load_dataset_builder(\"path/to/your/dataset\")\r\nprint(dataset_builder.cache_dir)\r\n```\r\n\r\nThis way, you don't have to recompute the hash of the dataset script yourself each time you modify the script." ]
2021-06-29T07:43:03
2022-08-04T11:58:36
2022-08-04T11:58:36
CONTRIBUTOR
null
null
null
null
## Describe the bug If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets. ## Steps to reproduce the bug - Create a local dataset builder class - Load the local builder class file using `load_dataset` and let the cache build - Update the file's content - The cache is rebuilt. ## Expected results With `ignore_verifications=True`, `load_dataset` should pick up the existing cache. ## Actual results A new cache is created. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.7 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2561/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2561/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
401 days, 4:15:33
https://api.github.com/repos/huggingface/datasets/issues/2559
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2559/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2559/comments
https://api.github.com/repos/huggingface/datasets/issues/2559/events
https://github.com/huggingface/datasets/issues/2559
931,849,724
MDU6SXNzdWU5MzE4NDk3MjQ=
2,559
Memory usage consistently increases when processing a dataset with `.map`
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apsdehal", "id": 3616806, "login": "apsdehal", "node_id": "MDQ6VXNlcjM2MTY4MDY=", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "repos_url": "https://api.github.com/users/apsdehal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "type": "User", "url": "https://api.github.com/users/apsdehal", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi ! Can you share the function you pass to `map` ?\r\nI know you mentioned it would be hard to share some code but this would really help to understand what happened", "This is the same behavior as in #4883, so I'm closing this issue as a duplicate. " ]
2021-06-28T18:31:58
2023-07-20T13:34:10
2023-07-20T13:34:10
CONTRIBUTOR
null
null
null
null
## Describe the bug I have an HF dataset with image paths stored in it, and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage keeps increasing consistently over time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease the arrow writer's batch size, but that doesn't seem to help. ## Steps to reproduce the bug Providing the code as-is would be hard. I can provide an MVP if that helps. ## Expected results Memory usage should stabilize some time after processing starts. ## Actual results Memory usage keeps increasing. ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.7 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2559/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2559/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
751 days, 19:02:12
https://api.github.com/repos/huggingface/datasets/issues/2556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2556/comments
https://api.github.com/repos/huggingface/datasets/issues/2556/events
https://github.com/huggingface/datasets/issues/2556
931,595,872
MDU6SXNzdWU5MzE1OTU4NzI=
2,556
Better DuplicateKeysError error to help the user debug the issue
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VijayKalmath", "id": 20517962, "login": "VijayKalmath", "node_id": "MDQ6VXNlcjIwNTE3OTYy", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "type": "User", "url": "https://api.github.com/users/VijayKalmath", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VijayKalmath", "id": 20517962, "login": "VijayKalmath", "node_id": "MDQ6VXNlcjIwNTE3OTYy", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "type": "User", "url": "https://api.github.com/users/VijayKalmath", "user_view_type": "public" } ]
[ "excuse me, my `datasets` version is `2.2.2`, but I also just see the error info like \r\n```\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 0\r\nKeys should be unique and deterministic in nature\r\n```", "Hi ! for which dataset do you have this error ?\r\n\r\nAlso note that this issue is just about improving the error message, which is not very friendly x)", "@lhoestq I would like to take a hit at improving the error message. Will open a draft PR and will reach out to you for review\r\n", "> DuplicateKeysError: both 42th and 1337th examples have the same keys `48`.\r\n\r\n@lhoestq when you mention 42th and 1337th in the above case , are these values the examples' \"id\" or are they the examples' index ? ", "Hi ! Thanks @VijayKalmath :)\r\n\r\nIn the general case, examples don't have an \"id\" field, so I think it should correspond to the index", "@lhoestq , I have opened a draft PR for this Issue. \r\n\r\nI wanted to check with you if there is a way to get `<path/to/the/dataset/script>` currently or do I need to add extra code to find that. \r\n\r\nIf I need to find the script , I can assume that the generator function will always be in `datasets/{dataset_name}/{dataset_name}.py`. ", "Thanks !\r\n\r\n> I wanted to check with you if there is a way to get <path/to/the/dataset/script> currently or do I need to add extra code to find that.\r\n\r\nYou don't have access to this info inside the ArrowWriter unfortunately. This info is available in builder.py in the DatasetBuilder code that uses the ArrowWriter though, maybe a try-catch there can do the job" ]
2021-06-28T13:50:57
2022-06-28T09:26:04
2022-06-28T09:26:04
MEMBER
null
null
null
null
As mentioned in https://github.com/huggingface/datasets/issues/2552 it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys. The current one is ```python datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 48 Keys should be unique and deterministic in nature ``` and we could have something that guides the user to debugging the issue: ```python DuplicateKeysError: both 42th and 1337th examples have the same keys `48`. Please fix the dataset script at <path/to/the/dataset/script> ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2556/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2556/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
364 days, 19:35:07
https://api.github.com/repos/huggingface/datasets/issues/2554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2554/comments
https://api.github.com/repos/huggingface/datasets/issues/2554/events
https://github.com/huggingface/datasets/issues/2554
931,453,855
MDU6SXNzdWU5MzE0NTM4NTU=
2,554
Multilabel metrics not supported
{ "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/GuillemGSubies", "id": 37592763, "login": "GuillemGSubies", "node_id": "MDQ6VXNlcjM3NTkyNzYz", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "site_admin": false, "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "type": "User", "url": "https://api.github.com/users/GuillemGSubies", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @GuillemGSubies, thanks for reporting.\r\n\r\nI have made a PR to fix this issue and allow metrics to be computed also for multilabel classification problems.", "Looks nice, thank you very much! 🚀 ", "Sorry for reopening but I just noticed that the `_compute` method for the F1 metric is still not good enough for multilabel problems:\r\n\r\nhttps://github.com/huggingface/datasets/blob/92a3ee549705aa0a107c9fa5caf463b3b3da2616/metrics/f1/f1.py#L115\r\n\r\nSomehow we should be able to change the parameter `average` at least", "@GuillemGSubies, the parameter `average` passed to `_compute` is then passed to `f1_score`. This is right." ]
2021-06-28T11:09:46
2021-10-13T12:29:13
2021-07-08T08:40:15
NONE
null
null
null
null
When I try to use a metric like F1 macro I get the following error: ``` TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' ``` There is an explicit casting here: https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L274 And looks like this is because here https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/metrics/f1/f1.py#L88 the features can only be integers, so we cannot use that F1 for multilabel. Instead, if I create the following F1 (ints replaced with sequence of ints), it will work: ```python class F1(datasets.Metric): def _info(self): return datasets.MetricInfo( description=_DESCRIPTION, citation=_CITATION, inputs_description=_KWARGS_DESCRIPTION, features=datasets.Features( { "predictions": datasets.Sequence(datasets.Value("int32")), "references": datasets.Sequence(datasets.Value("int32")), } ), reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html"], ) def _compute(self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None): return { "f1": f1_score( references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight, ), } ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2554/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
9 days, 21:30:29
https://api.github.com/repos/huggingface/datasets/issues/2553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2553/comments
https://api.github.com/repos/huggingface/datasets/issues/2553/events
https://github.com/huggingface/datasets/issues/2553
931,365,926
MDU6SXNzdWU5MzEzNjU5MjY=
2,553
load_dataset("web_nlg") NonMatchingChecksumError
{ "avatar_url": "https://avatars.githubusercontent.com/u/33730312?v=4", "events_url": "https://api.github.com/users/alxthm/events{/privacy}", "followers_url": "https://api.github.com/users/alxthm/followers", "following_url": "https://api.github.com/users/alxthm/following{/other_user}", "gists_url": "https://api.github.com/users/alxthm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alxthm", "id": 33730312, "login": "alxthm", "node_id": "MDQ6VXNlcjMzNzMwMzEy", "organizations_url": "https://api.github.com/users/alxthm/orgs", "received_events_url": "https://api.github.com/users/alxthm/received_events", "repos_url": "https://api.github.com/users/alxthm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alxthm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alxthm/subscriptions", "type": "User", "url": "https://api.github.com/users/alxthm", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
[ "Hi ! Thanks for reporting. This is due to the WebNLG repository that got updated today.\r\nI just pushed a fix at #2558 - this shouldn't happen anymore in the future.", "This is fixed on `master` now :)\r\nWe'll do a new release soon !" ]
2021-06-28T09:26:46
2021-06-28T17:23:39
2021-06-28T17:23:16
NONE
null
null
null
null
Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev") ``` Gives ``` NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip'] ``` ## Environment info - `datasets` version: 1.8.0 - Platform: macOS-11.3.1-x86_64-i386-64bit - Python version: 3.9.4 - PyArrow version: 3.0.0 Also tested on Linux, with python 3.6.8
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2553/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2553/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
7:56:30
https://api.github.com/repos/huggingface/datasets/issues/2552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2552/comments
https://api.github.com/repos/huggingface/datasets/issues/2552/events
https://github.com/huggingface/datasets/issues/2552
931,354,687
MDU6SXNzdWU5MzEzNTQ2ODc=
2,552
Keys should be unique error on code_search_net
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Two questions:\r\n- with `datasets-cli env` we don't have any information on the dataset script version used. Should we give access to this somehow? Either as a note in the Error message or as an argument with the name of the dataset to `datasets-cli env`?\r\n- I don't really understand why the id is duplicated in the code of `code_search_net`, how can I debug this actually?", "Thanks for reporting. There was indeed an issue with the keys. The key was the addition of the file id and row id, which resulted in collisions. I just opened a PR to fix this at https://github.com/huggingface/datasets/pull/2555\r\n\r\nTo help users debug this kind of errors we could try to show a message like this\r\n```python\r\nDuplicateKeysError: both 42th and 1337th examples have the same keys `48`.\r\nPlease fix the dataset script at <path/to/the/dataset/script>\r\n```\r\n\r\nThis way users who what to look for if they want to debug this issue. I opened an issue to track this: https://github.com/huggingface/datasets/issues/2556", "and are we sure there are not a lot of datasets which are now broken with this change?", "Thanks to the dummy data, we know for sure that most of them work as expected.\r\n`code_search_net` wasn't caught because the dummy data only have one dummy data file while the dataset script can actually load several of them using `os.listdir`. Let me take a look at all the other datasets that use `os.listdir` to see if the keys are alright", "I found one issue on `fever` (PR here: https://github.com/huggingface/datasets/pull/2557)\r\nAll the other ones seem fine :)", "Hi! Got same error when loading other dataset:\r\n```python3\r\nload_dataset('wikicorpus', 'raw_en')\r\n```\r\n\r\ntb:\r\n```pytb\r\n---------------------------------------------------------------------------\r\nDuplicatedKeysError Traceback (most recent call last)\r\n/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator)\r\n 1109 example = self.info.features.encode_example(record)\r\n-> 1110 writer.write(example, key)\r\n 1111 finally:\r\n\r\n/opt/conda/lib/python3.8/site-packages/datasets/arrow_writer.py in write(self, example, key, writer_batch_size)\r\n 341 if self._check_duplicates:\r\n--> 342 self.check_duplicate_keys()\r\n 343 # Re-intializing to empty list for next batch\r\n\r\n/opt/conda/lib/python3.8/site-packages/datasets/arrow_writer.py in check_duplicate_keys(self)\r\n 352 if hash in tmp_record:\r\n--> 353 raise DuplicatedKeysError(key)\r\n 354 else:\r\n\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 519\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\nVersion: datasets==1.11.0", "Fixed by #2555.", "The wikicorpus issue has been fixed by https://github.com/huggingface/datasets/pull/2844\r\n\r\nWe'll do a new release of `datasets` soon :)" ]
2021-06-28T09:15:20
2021-09-06T14:08:30
2021-09-02T08:25:29
MEMBER
null
null
null
null
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] Downloading: 19.1kB [00:00, 10.1MB/s] No config specified, defaulting to: code_search_net/all Downloading and preparing dataset code_search_net/all (download: 4.77 GiB, generated: 5.99 GiB, post-processed: Unknown size, total: 10.76 GiB) to /Users/thomwolf/.cache/huggingface/datasets/code_search_net/all/1.0.0/b3e8278faf5d67da1d06981efbeac3b76a2900693bd2239bbca7a4a3b0d6e52a... Traceback (most recent call last): File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/builder.py", line 1067, in _prepare_split writer.write(example, key) File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 343, in write self.check_duplicate_keys() File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 354, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 48 Keys should be unique and deterministic in nature ``` ## Environment info - `datasets` version: 1.8.1.dev0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 2.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2552/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2552/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
65 days, 23:10:09
https://api.github.com/repos/huggingface/datasets/issues/2550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2550/comments
https://api.github.com/repos/huggingface/datasets/issues/2550/events
https://github.com/huggingface/datasets/issues/2550
930,951,287
MDU6SXNzdWU5MzA5NTEyODc=
2,550
Allow for incremental cumulative metric updates in a distributed setup
{ "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eladsegal", "id": 13485709, "login": "eladsegal", "node_id": "MDQ6VXNlcjEzNDg1NzA5", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "repos_url": "https://api.github.com/users/eladsegal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "type": "User", "url": "https://api.github.com/users/eladsegal", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[]
2021-06-27T15:00:58
2021-09-26T13:42:39
2021-09-26T13:42:39
CONTRIBUTOR
null
null
null
null
Currently, using a metric allows for one of the following: - Per example/batch metrics - Cumulative metrics over the whole data What I'd like is to have an efficient way to get cumulative metrics over the examples/batches added so far, in order to display it as part of the progress bar during training/evaluation. Since most metrics are just an average of per-example metrics (which aren't?), an efficient calculation can be done as follows: `((score_cumulative * n_cumulative) + (score_new * n_new)) / (n_cumulative+ n_new)` where `n` and `score` refer to number of examples and metric score, `cumulative` refers to the cumulative metric and `new` refers to the addition of new examples. If you don't want to add this capability in the library, a simple solution exists so users can do it themselves: It is easy to implement for a single process setup, but in a distributed one there is no way to get the correct `n_new`. The solution for this is to return the number of examples that was used to compute the metrics in `.compute()` by adding the following line here: https://github.com/huggingface/datasets/blob/5a3221785311d0ce86c2785b765e86bd6997d516/src/datasets/metric.py#L402-L403 ``` output["number_of_examples"] = len(predictions) ``` and also remove the log message here so it won't spam: https://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/src/datasets/metric.py#L411 If this change is ok with you, I'll open a pull request.
{ "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eladsegal", "id": 13485709, "login": "eladsegal", "node_id": "MDQ6VXNlcjEzNDg1NzA5", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "repos_url": "https://api.github.com/users/eladsegal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "type": "User", "url": "https://api.github.com/users/eladsegal", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2550/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2550/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
90 days, 22:41:41
https://api.github.com/repos/huggingface/datasets/issues/2549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2549/comments
https://api.github.com/repos/huggingface/datasets/issues/2549/events
https://github.com/huggingface/datasets/issues/2549
929,819,093
MDU6SXNzdWU5Mjk4MTkwOTM=
2,549
Handling unlabeled datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4", "events_url": "https://api.github.com/users/nelson-liu/events{/privacy}", "followers_url": "https://api.github.com/users/nelson-liu/followers", "following_url": "https://api.github.com/users/nelson-liu/following{/other_user}", "gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nelson-liu", "id": 7272031, "login": "nelson-liu", "node_id": "MDQ6VXNlcjcyNzIwMzE=", "organizations_url": "https://api.github.com/users/nelson-liu/orgs", "received_events_url": "https://api.github.com/users/nelson-liu/received_events", "repos_url": "https://api.github.com/users/nelson-liu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions", "type": "User", "url": "https://api.github.com/users/nelson-liu", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
[ "Hi @nelson-liu,\r\n\r\nYou can pass the parameter `features` to `load_dataset`: https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset\r\n\r\nIf you look at the code of the MNLI script you referred in your question (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py#L62-L77), you can see how the Features were originally specified. \r\n\r\nFeel free to use it as a template, customize it and pass it to `load_dataset` using the parameter `features`.", "ah got it, thanks!" ]
2021-06-25T04:32:23
2021-06-25T21:07:57
2021-06-25T21:07:56
NONE
null
null
null
null
Hi! Is there a way for datasets to produce unlabeled instances (e.g., the `ClassLabel` can be nullable). For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"label": data.get("gold_label")`, but got the following error: ``` File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 989, in _prepare_split example = self.info.features.encode_example(record) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 953, in encode_example return encode_nested_example(self, example) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in encode_nested_example k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 875, in encode_nested_example return schema.encode_example(obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 653, in encode_example if not -1 <= example_data < self.num_classes: TypeError: '<=' not supported between instances of 'int' and 'NoneType' ``` What's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers?
{ "avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4", "events_url": "https://api.github.com/users/nelson-liu/events{/privacy}", "followers_url": "https://api.github.com/users/nelson-liu/followers", "following_url": "https://api.github.com/users/nelson-liu/following{/other_user}", "gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nelson-liu", "id": 7272031, "login": "nelson-liu", "node_id": "MDQ6VXNlcjcyNzIwMzE=", "organizations_url": "https://api.github.com/users/nelson-liu/orgs", "received_events_url": "https://api.github.com/users/nelson-liu/received_events", "repos_url": "https://api.github.com/users/nelson-liu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions", "type": "User", "url": "https://api.github.com/users/nelson-liu", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2549/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2549/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
16:35:33
https://api.github.com/repos/huggingface/datasets/issues/2548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2548/comments
https://api.github.com/repos/huggingface/datasets/issues/2548/events
https://github.com/huggingface/datasets/issues/2548
929,232,831
MDU6SXNzdWU5MjkyMzI4MzE=
2,548
Field order issue in loading json
{ "avatar_url": "https://avatars.githubusercontent.com/u/55288513?v=4", "events_url": "https://api.github.com/users/luyug/events{/privacy}", "followers_url": "https://api.github.com/users/luyug/followers", "following_url": "https://api.github.com/users/luyug/following{/other_user}", "gists_url": "https://api.github.com/users/luyug/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/luyug", "id": 55288513, "login": "luyug", "node_id": "MDQ6VXNlcjU1Mjg4NTEz", "organizations_url": "https://api.github.com/users/luyug/orgs", "received_events_url": "https://api.github.com/users/luyug/received_events", "repos_url": "https://api.github.com/users/luyug/repos", "site_admin": false, "starred_url": "https://api.github.com/users/luyug/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luyug/subscriptions", "type": "User", "url": "https://api.github.com/users/luyug", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @luyug, thanks for reporting.\r\n\r\nThe good news is that we fixed this issue only 9 days ago: #2507.\r\n\r\nThe patch is already in the master branch of our repository and it will be included in our next `datasets` release version 1.9.0.\r\n\r\nFeel free to reopen the issue if the problem persists." ]
2021-06-24T13:29:53
2021-06-24T14:36:43
2021-06-24T14:34:05
NONE
null
null
null
null
## Describe the bug The `load_dataset` function expects columns in alphabetical order when loading JSON files. A similar bug was previously reported for CSV in #623 and fixed in #684. ## Steps to reproduce the bug For a JSON file `j.json`, ``` {"c":321, "a": 1, "b": 2} ``` Running the following, ``` f = datasets.Features({'a': Value('int32'), 'b': Value('int32'), 'c': Value('int32')}) json_data = datasets.load_dataset('json', data_files='j.json', features=f) ``` ## Expected results A successful load. ## Actual results ``` File "pyarrow/table.pxi", line 1409, in pyarrow.lib.Table.cast ValueError: Target schema's field names are not matching the table's field names: ['c', 'a', 'b'], ['a', 'b', 'c'] ``` ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 3.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2548/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1:04:12
https://api.github.com/repos/huggingface/datasets/issues/2547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2547/comments
https://api.github.com/repos/huggingface/datasets/issues/2547/events
https://github.com/huggingface/datasets/issues/2547
929,192,329
MDU6SXNzdWU5MjkxOTIzMjk=
2,547
Dataset load_from_disk is too slow
{ "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avacaondata", "id": 35173563, "login": "avacaondata", "node_id": "MDQ6VXNlcjM1MTczNTYz", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "repos_url": "https://api.github.com/users/avacaondata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "type": "User", "url": "https://api.github.com/users/avacaondata", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
[ "Hi ! It looks like an issue with the virtual disk you are using.\r\n\r\nWe load datasets using memory mapping. In general it makes it possible to load very big files instantaneously since it doesn't have to read the file (it just assigns virtual memory to the file on disk).\r\nHowever there happens to be issues with virtual disks (for example on spot instances), for which memory mapping does a pass over the entire file, and this takes a while. We are discussing about this issue here: #2252 \r\n\r\nMemory mapping is something handled by the OS so we can't do much about it, though we're still trying to figure out what's causing this behavior exactly to see what we can do.", "Okay, that's exactly my case, with spot instances... Therefore this isn't something we can change in any way to be able to load the dataset faster? I mean, what do you do internally at huggingface for being able to use spot instances with datasets efficiently?", "There are no solutions yet unfortunately.\r\nWe're still trying to figure out a way to make the loading instantaneous on such disks, I'll keep you posted" ]
2021-06-24T12:45:44
2021-06-25T14:56:38
null
NONE
null
null
null
null
@lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk when there are no preprocessing steps; it's only loading it with load_from_disk. I have 96 CPUs, but only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in the context of language model training, so I'm wasting $100 each time I have to load the dataset from disk again (for example, because the spot instance was stopped by AWS and I need to relaunch it). ## Steps to reproduce the bug Just get OSCAR in Spanish (around 150GB), save it to disk first, and then load the processed dataset. It's not dependent on the task you're doing; it just depends on the size of the text dataset. ## Expected results I expect the dataset to be loaded in a normal time by using the whole machine for loading it. I mean, if you store the dataset in multiple files (.arrow) and then load it from multiple files, you can use multiprocessing for that and therefore not waste so much time. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Ubuntu 18 - Python version: 3.8 I've seen you're planning to include a streaming mode for load_dataset, but that only saves the downloading and processing time; that's not the problem for me. You cannot save the pure loading-from-disk time, so that's not a solution for my use case or for anyone who wants to use your library for training a language model.
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2547/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2547/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2543/comments
https://api.github.com/repos/huggingface/datasets/issues/2543/events
https://github.com/huggingface/datasets/issues/2543
928,571,915
MDU6SXNzdWU5Mjg1NzE5MTU=
2,543
switching some low-level log.info's to log.debug?
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @stas00, thanks for pointing out this issue with logging.\r\n\r\nI agree that `datasets` can sometimes be too verbose... I can create a PR and we could discuss there the choice of the log levels for different parts of the code." ]
2021-06-23T19:26:55
2021-06-25T13:40:19
2021-06-25T13:40:19
CONTRIBUTOR
null
null
null
null
In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting gives consistent logging across all involved components. The trouble is that now we get a ton of these: ``` 06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 acquired on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock 06/23/2021 12:15:31 - INFO - datasets.arrow_writer - Done writing 50 examples in 12280 bytes /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow. 06/23/2021 12:15:31 - INFO - datasets.arrow_dataset - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns. 06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 released on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock ``` May I suggest that these be `log.debug`, as they are not informative to the user. More examples: these are not informative, just too much information: ``` 06/23/2021 12:14:26 - INFO - datasets.load - Checking /home/stas/.cache/huggingface/datasets/downloads/459933f1fe47711fad2f6ff8110014ff189120b45ad159ef5b8e90ea43a174fa.e23e7d1259a8c6274a82a42a8936dd1b87225302c6dc9b7261beb3bc2daac640.py for additional imports. 06/23/2021 12:14:27 - INFO - datasets.builder - Constructing Dataset for split train, validation, test, from /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a ``` While these are informative: ``` 06/23/2021 12:14:27 - INFO - datasets.info - Loading Dataset Infos from /home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt16/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a 06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a) ``` I also realize that `transformers` examples don't have to use `info` for `datasets`, and can keep the default `warning` level so the logging stays less noisy. But I think currently the log levels are slightly misused and skewed by one level. Many `warning`s would be better as `info`s and most `info`s as `debug`. e.g.: ``` 06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a) ``` Why is this a warning? It is informing me that the cache is being used; there is nothing to be worried about. I'd have it as `info`. Warnings are typically something bordering on an error, or the first thing to check when things don't work as expected. Infrequent `info` messages are there to inform of the different stages or important events. Everything else is debug. At least that's the way I understand things.
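For reference, both libraries expose verbosity helpers, so a training script can put `datasets` on the same level as `transformers` or keep it one level quieter. A minimal sketch, assuming the documented `set_verbosity_*` helpers in both libraries:

```python
import datasets
import transformers

# Put both libraries on the same log level, e.g. INFO:
transformers.logging.set_verbosity_info()
datasets.logging.set_verbosity_info()

# Or keep `datasets` one level quieter than `transformers`:
datasets.logging.set_verbosity_warning()
```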
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2543/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2543/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
1 day, 18:13:24
https://api.github.com/repos/huggingface/datasets/issues/2542
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2542/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2542/comments
https://api.github.com/repos/huggingface/datasets/issues/2542/events
https://github.com/huggingface/datasets/issues/2542
928,540,382
MDU6SXNzdWU5Mjg1NDAzODI=
2,542
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "very much related: https://github.com/huggingface/datasets/pull/2333", "Hi @VictorSanh, thank you for reporting this issue with duplicated keys.\r\n\r\n- The issue with \"adversarial_qa\" was fixed 23 days ago: #2433. Current version of `datasets` (1.8.0) includes the patch.\r\n- I am investigating the issue with `drop`. I'll ping you to keep you informed.", "Hi @VictorSanh, the issue is already fixed and merged into master branch and will be included in our next release version 1.9.0.", "thank you!" ]
2021-06-23T18:41:16
2021-06-25T21:50:05
2021-06-24T14:57:08
CONTRIBUTOR
null
null
null
null
## Describe the bug Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("drop") load_dataset("adversarial_qa", "adversarialQA") ``` ## Expected results The examples keys should be unique. ## Actual results ```bash >>> load_dataset("drop") Using custom data configuration default Downloading and preparing dataset drop/default (download: 7.92 MiB, generated: 111.88 MiB, post-processed: Unknown size, total: 119.80 MiB) to /home/hf/.cache/huggingface/datasets/drop/default/0.1.0/7a94f1e2bb26c4b5c75f89857c06982967d7416e5af935a9374b9bccf5068026... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset use_auth_token=use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 992, in _prepare_split num_examples, num_bytes = writer.finalize() File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 409, in finalize self.check_duplicate_keys() File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 28553293-d719-441b-8f00-ce3dc6df5398 Keys should be unique and deterministic in nature ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.7.0 - Platform: Linux-5.4.0-1044-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyArrow version: 3.0.0
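The error reported above comes from `_generate_examples` yielding the same key twice. A minimal sketch of the usual fix, deriving the key from a running index instead of reusing a non-unique id from the raw data (the field names and JSON layout here are hypothetical, not taken from the actual `drop` or `adversarial_qa` scripts):

```python
import json

def generate_examples(filepath):
    """Sketch of a dataset script's example generator that yields unique keys."""
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for idx, example in enumerate(data):
        # `idx` stays unique and deterministic even if example["id"] repeats
        # in the source data, which avoids DuplicatedKeysError.
        yield idx, {
            "id": example["id"],
            "question": example["question"],
            "answer": example["answer"],
        }
```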
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2542/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2542/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
20:15:52
https://api.github.com/repos/huggingface/datasets/issues/2538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2538/comments
https://api.github.com/repos/huggingface/datasets/issues/2538/events
https://github.com/huggingface/datasets/issues/2538
927,940,691
MDU6SXNzdWU5Mjc5NDA2OTE=
2,538
Loading partial dataset when debugging
{ "avatar_url": "https://avatars.githubusercontent.com/u/9061913?v=4", "events_url": "https://api.github.com/users/reachtarunhere/events{/privacy}", "followers_url": "https://api.github.com/users/reachtarunhere/followers", "following_url": "https://api.github.com/users/reachtarunhere/following{/other_user}", "gists_url": "https://api.github.com/users/reachtarunhere/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/reachtarunhere", "id": 9061913, "login": "reachtarunhere", "node_id": "MDQ6VXNlcjkwNjE5MTM=", "organizations_url": "https://api.github.com/users/reachtarunhere/orgs", "received_events_url": "https://api.github.com/users/reachtarunhere/received_events", "repos_url": "https://api.github.com/users/reachtarunhere/repos", "site_admin": false, "starred_url": "https://api.github.com/users/reachtarunhere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reachtarunhere/subscriptions", "type": "User", "url": "https://api.github.com/users/reachtarunhere", "user_view_type": "public" }
[]
open
false
null
[]
[ "Hi ! `load_dataset` downloads the full dataset once and caches it, so that subsequent calls to `load_dataset` just reloads the dataset from your disk.\r\nThen when you specify a `split` in `load_dataset`, it will just load the requested split from the disk. If your specified split is a sliced split (e.g. `\"train[:10]\"`), then it will load the 10 first rows of the train split that you have on disk.\r\n\r\nTherefore, as long as you don't delete your cache, all your calls to `load_dataset` will be very fast. Except the first call that downloads the dataset of course ^^", "That’s a use case for the new streaming feature, no?", "Hi @reachtarunhere.\r\n\r\nBesides the above insights provided by @lhoestq and @thomwolf, there is also a Dataset feature in progress (I plan to finish it this week): #2249, which will allow you, when calling `load_dataset`, to pass the option to download/preprocess/cache only some specific split(s), which will definitely speed up your workflow.\r\n\r\nIf this feature is interesting for you, I can ping you once it will be merged into the master branch.", "Thanks all for responding.\r\n\r\nHey @albertvillanova \r\n\r\nThanks. Yes, I would be interested.\r\n\r\n@lhoestq I think even if a small split is specified it loads up the full dataset from the disk (please correct me if this is not the case). Because it does seem to be slow to me even on subsequent calls. There is no repeated downloading so it seems that the cache is working.\r\n\r\nI am not aware of the streaming feature @thomwolf mentioned. So I might need to read up on it.", "@reshinthadithyan I use the .select function to have a fraction of indices.", "If I want to create a dataset, containing only the 10 elements of a given dataset (slice it), how do I do that?", "```python \r\nsmall_ds = ds.select(range(10))\r\n```", "\r\n\r\n> ```python\r\n> small_ds = ds.select(range(10))\r\n> ```\r\n\r\nThanks, but this doesn't help me to save time during initial loading, right?", "Indeed by default load_dataset would download and prepare everything as Arrow files. And passing `split=train[:10]` memory maps only the beginning of the full dataset that has been prepared on disk.\r\n\r\nIf you don't want to download everything, you can use streaming : \r\n```python \r\nids = load_dataset(..., streaming=True)\r\nfirst_samples = list(ids[\"train\"].take(10))\r\n```\r\n\r\nTo get a Dataset you can use \r\n```python \r\nds = Dataset.from_generator(ids.take(10).__iter__)\r\n```\r\n\r\nedit: fixed small bug", "Thanks @lhoestq, but I don't think it is 100% accurate, as it doesn't keep the dataset structure exactly the same.\r\nTo load the full dataset, I do:\r\n```\r\ndata = load_dataset(\"json\", data_files=\"a.json\")\r\ntrain_data = data[\"train\"].shuffle()\r\n```\r\n\r\nBut when I am changing it as per your instructions: \r\n```\r\nids = load_dataset(\"json\", data_files=\"a.json\", streaming=True)\r\ndata = Dataset.from_generator(ids[\"train\"].take(1).__iter__)\r\ntrain_data = data[\"train\"].shuffle()\r\n```\r\nIt throws KeyError.\r\nI need a simple way, like you suggested, to have a subset of a Dataset, which exactly the same attributes.\r\n", "Whoops I fixed my code sorry\r\n```diff\r\n- ds = Dataset.from_generator(ids[\"train\"].take(10).__iter__)\r\n+ ds = Dataset.from_generator(ids.take(10).__iter__)\r\n```\r\n\r\nin your case that means running\r\n```python\r\ntrain_data = data.shuffle()\r\n```\r\n\r\nwithout `[\"train\"]`" ]
2021-06-23T07:19:52
2023-04-19T11:05:38
null
NONE
null
null
null
null
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset, it takes some time even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues. Is there a way to load only part of the dataset with load_dataset? This would really speed up my workflow. Something like a debug mode would really help. Thanks!
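A minimal sketch of the two options discussed in the comments above: sliced splits and, in versions that support it, streaming. `imdb` here just stands in for any Hub dataset:

```python
from datasets import load_dataset

# Loads only the first 100 rows of the train split from the cached Arrow
# files (the full dataset is still downloaded and prepared once):
small_train = load_dataset("imdb", split="train[:100]")

# Streaming avoids downloading and preparing the whole dataset up front:
streamed = load_dataset("imdb", split="train", streaming=True)
first_ten = list(streamed.take(10))
```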
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2538/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2538/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/2536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2536/comments
https://api.github.com/repos/huggingface/datasets/issues/2536/events
https://github.com/huggingface/datasets/issues/2536
927,338,639
MDU6SXNzdWU5MjczMzg2Mzk=
2,536
Use `Audio` features for `AutomaticSpeechRecognition` task template
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" } ]
[ "I'm just retaking and working on #2324. 😉 ", "Resolved via https://github.com/huggingface/datasets/pull/4006." ]
2021-06-22T15:07:21
2022-06-01T17:18:16
2022-06-01T17:18:16
MEMBER
null
null
null
null
In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis, this is brittle, as it doesn't port easily across different operating systems. The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in `datasets`, but should be included in the `AutomaticSpeechRecognition` template once they are.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2536/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2536/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
344 days, 2:10:55
https://api.github.com/repos/huggingface/datasets/issues/2532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2532/comments
https://api.github.com/repos/huggingface/datasets/issues/2532/events
https://github.com/huggingface/datasets/issues/2532
927,063,196
MDU6SXNzdWU5MjcwNjMxOTY=
2,532
Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task
{ "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "events_url": "https://api.github.com/users/cosmeowpawlitan/events{/privacy}", "followers_url": "https://api.github.com/users/cosmeowpawlitan/followers", "following_url": "https://api.github.com/users/cosmeowpawlitan/following{/other_user}", "gists_url": "https://api.github.com/users/cosmeowpawlitan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cosmeowpawlitan", "id": 50871412, "login": "cosmeowpawlitan", "node_id": "MDQ6VXNlcjUwODcxNDEy", "organizations_url": "https://api.github.com/users/cosmeowpawlitan/orgs", "received_events_url": "https://api.github.com/users/cosmeowpawlitan/received_events", "repos_url": "https://api.github.com/users/cosmeowpawlitan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cosmeowpawlitan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cosmeowpawlitan/subscriptions", "type": "User", "url": "https://api.github.com/users/cosmeowpawlitan", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
[ "Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?", "> Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?\r\n\r\nOh, I am sorry\r\nI would reopen the post on huggingface/transformers" ]
2021-06-22T10:08:18
2021-06-23T05:17:25
2021-06-23T05:17:25
CONTRIBUTOR
null
null
null
null
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this Hugging Face example](https://huggingface.co/transformers/custom_datasets.html#tok-ner). The pipeline works fine with most instances in different languages, but unfortunately, [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) breaks the alignment of `return_offsets_mapping`: ![image](https://user-images.githubusercontent.com/50871412/122904371-db192700-d382-11eb-8917-1775db76db69.png) Without the try/except block, it raises `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`; an example is shown here [(another colab notebook)](https://colab.research.google.com/drive/1MmOqf3ppzzdKKyMWkn0bJy6DqzOO0SSm?usp=sharing). It is clear that the normalizer is the step that breaks the alignment, as `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` returns 'コト'. One workaround is to apply `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) under the name `udposTestDatasetWorkaround`. I guess similar logic should be included inside the tokenizer and the offsets_mapping generation process so that users don't need to include it in their code. But I don't understand the tokenizer code well enough to do this myself. p.s. **I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466)** `get_dataset` is just a simple wrapper for `load_dataset` and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")`
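A minimal sketch of the pre-normalization workaround described above, assuming a fast tokenizer whose `backend_tokenizer` exposes the underlying normalizer:

```python
from transformers import XLMRobertaTokenizerFast

tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")

text = "ヿ"  # kana ligature that the normalizer expands to "コト"
# Normalizing the raw text first keeps return_offsets_mapping aligned with
# the string the tokenizer actually sees:
normalized = tokenizer.backend_tokenizer.normalizer.normalize_str(text)
encoding = tokenizer(normalized, return_offsets_mapping=True)
print(encoding["offset_mapping"])
```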
{ "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "events_url": "https://api.github.com/users/cosmeowpawlitan/events{/privacy}", "followers_url": "https://api.github.com/users/cosmeowpawlitan/followers", "following_url": "https://api.github.com/users/cosmeowpawlitan/following{/other_user}", "gists_url": "https://api.github.com/users/cosmeowpawlitan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cosmeowpawlitan", "id": 50871412, "login": "cosmeowpawlitan", "node_id": "MDQ6VXNlcjUwODcxNDEy", "organizations_url": "https://api.github.com/users/cosmeowpawlitan/orgs", "received_events_url": "https://api.github.com/users/cosmeowpawlitan/received_events", "repos_url": "https://api.github.com/users/cosmeowpawlitan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cosmeowpawlitan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cosmeowpawlitan/subscriptions", "type": "User", "url": "https://api.github.com/users/cosmeowpawlitan", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2532/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2532/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
19:09:07
https://api.github.com/repos/huggingface/datasets/issues/2528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2528/comments
https://api.github.com/repos/huggingface/datasets/issues/2528/events
https://github.com/huggingface/datasets/issues/2528
926,314,656
MDU6SXNzdWU5MjYzMTQ2NTY=
2,528
Logging cannot be set to NOTSET similar to transformers
{ "avatar_url": "https://avatars.githubusercontent.com/u/34662010?v=4", "events_url": "https://api.github.com/users/joshzwiebel/events{/privacy}", "followers_url": "https://api.github.com/users/joshzwiebel/followers", "following_url": "https://api.github.com/users/joshzwiebel/following{/other_user}", "gists_url": "https://api.github.com/users/joshzwiebel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joshzwiebel", "id": 34662010, "login": "joshzwiebel", "node_id": "MDQ6VXNlcjM0NjYyMDEw", "organizations_url": "https://api.github.com/users/joshzwiebel/orgs", "received_events_url": "https://api.github.com/users/joshzwiebel/received_events", "repos_url": "https://api.github.com/users/joshzwiebel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joshzwiebel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshzwiebel/subscriptions", "type": "User", "url": "https://api.github.com/users/joshzwiebel", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
[ "Hi @joshzwiebel, thanks for reporting. We are going to align with `transformers`." ]
2021-06-21T15:04:54
2021-06-24T14:42:47
2021-06-24T14:42:47
NONE
null
null
null
null
## Describe the bug In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets, however in Datasets this is no longer possible. This is because transformers set the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b/src/transformers/file_utils.py#L1449) `disable=bool(logging.get_verbosity() == logging.NOTSET)` and datasets accomplishes this like [so](https://github.com/huggingface/datasets/blob/83554e410e1ab8c6f705cfbb2df7953638ad3ac1/src/datasets/utils/file_utils.py#L493) `not_verbose = bool(logger.getEffectiveLevel() > WARNING)` ## Steps to reproduce the bug ```python import datasets import logging datasets.logging.get_verbosity = lambda : logging.NOTSET datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ``` ## Expected results The code should download and load the dataset as normal without displaying progress bars ## Actual results ```ImportError Traceback (most recent call last) <ipython-input-4-aec65c0509c6> in <module> ----> 1 datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ~/venv/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs) 713 dataset=True, 714 return_resolved_file_path=True, --> 715 use_auth_token=use_auth_token, 716 ) 717 # Set the base path for downloads as the parent of the script location ~/venv/lib/python3.7/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs) 350 file_path = hf_bucket_url(path, filename=name, dataset=False) 351 try: --> 352 local_path = cached_path(file_path, download_config=download_config) 353 except FileNotFoundError: 354 raise FileNotFoundError( ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 289 use_etag=download_config.use_etag, 290 max_retries=download_config.max_retries, --> 291 use_auth_token=download_config.use_auth_token, 292 ) 293 elif os.path.exists(url_or_filename): ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 668 headers=headers, 669 cookies=cookies, --> 670 max_retries=max_retries, 671 ) 672 ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries) 493 initial=resume_size, 494 desc="Downloading", --> 495 disable=not_verbose, 496 ) 497 for chunk in response.iter_content(chunk_size=1024): ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in __init__(self, *args, **kwargs) 217 total = self.total * unit_scale if self.total else self.total 218 self.container = self.status_printer( --> 219 self.fp, total, self.desc, self.ncols) 220 self.sp = self.display 221 ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in status_printer(_, total, desc, ncols) 95 if IProgress is None: # #187 #451 #558 #872 96 raise ImportError( ---> 97 "IProgress not found. Please update jupyter and ipywidgets." 
98 " See https://ipywidgets.readthedocs.io/en/stable" 99 "/user_install.html") ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8 - Python version: 3.7.10 - PyArrow version: 3.0.0 I am running this code on Deepnote and which important to this issue **does not** support IPywidgets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2528/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2528/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 23:37:53