Columns in each record below (values appear in this order, separated by `|`):

- url: string, length 58-61
- repository_url: string, 1 distinct value
- labels_url: string, length 72-75
- comments_url: string, length 67-70
- events_url: string, length 65-68
- html_url: string, length 48-51
- id: int64, 600M-3.67B
- node_id: string, length 18-24
- number: int64, 2-7.88k
- title: string, length 1-290
- user: dict
- labels: list, length 0-4
- state: string, 2 distinct values
- locked: bool, 1 class
- assignee: dict
- assignees: list, length 0-4
- comments: list, length 0-30
- created_at: timestamp[s], 2020-04-14 18:18:51 to 2025-11-26 16:16:56
- updated_at: timestamp[s], 2020-04-29 09:23:05 to 2025-11-30 03:52:07
- closed_at: timestamp[s], 2020-04-29 09:23:05 to 2025-11-21 12:31:19 (nullable)
- author_association: string, 4 distinct values
- type: null
- active_lock_reason: null
- draft: null
- pull_request: null
- body: string, length 0-228k (nullable)
- closed_by: dict
- reactions: dict
- timeline_url: string, length 67-70
- performed_via_github_app: null
- state_reason: string, 4 distinct values
- sub_issues_summary: dict
- issue_dependencies_summary: dict
- is_pull_request: bool, 1 class
- closed_at_time_taken: duration[s]
https://api.github.com/repos/huggingface/datasets/issues/5206
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5206/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5206/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5206/events
|
https://github.com/huggingface/datasets/issues/5206
| 1,437,223,894
|
I_kwDODunzps5VqkvW
| 5,206
|
Use logging instead of printing to console
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Actually upon closer inspection, it is documented in the code that this behavior is intentional, so I'll close this."
] | 2022-11-05T23:48:02
| 2022-11-06T00:06:00
| 2022-11-06T00:05:59
|
NONE
| null | null | null | null |
### Describe the bug
Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L830)) generated by the `DatasetBuilder` are printed to the console instead of passed to `datasets` logger.
### Steps to reproduce the bug
```python
>> import datasets
>> datasets.load_dataset("some-dataset")
Downloading and preparing dataset csv/data to <path>...
Downloading data files: 100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 7729.06it/s]
Extracting data files: 100%|███████████████████████████████████████████| 3/3 [00:00<00:00, 527.23it/s]
Dataset csv downloaded and prepared to <path>. Subsequent calls will reuse this data.
```
### Expected behavior
The logs should not be printed to the console directly but passed to the logger, so that users can redirect them wherever they want.
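If these messages went through the library's logger instead of `print()`, redirecting them would only require standard `logging` configuration. A minimal sketch of what that redirection could look like (the log file name and format are arbitrary choices, not part of the library):
```python
import logging

# Hypothetical destination for the build messages; any handler would work.
handler = logging.FileHandler("datasets_build.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))

# The library's loggers live under the "datasets" namespace, so a handler attached
# here would capture messages emitted through the logger rather than through print().
datasets_logger = logging.getLogger("datasets")
datasets_logger.addHandler(handler)
datasets_logger.setLevel(logging.INFO)
```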
### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-13.0-x86_64-i386-64bit
- Python version: 3.9.15
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5206/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5206/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:17:57
|
https://api.github.com/repos/huggingface/datasets/issues/5204
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5204/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5204/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5204/events
|
https://github.com/huggingface/datasets/issues/5204
| 1,437,221,259
|
I_kwDODunzps5VqkGL
| 5,204
|
`push_to_hub` not propagating `token` through `DownloadConfig`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
] |
[
"#self-assign",
"@lhoestq can you close this issue as part of the recent #5205 merge? Thanks π€ ",
"Thank you :)"
] | 2022-11-05T23:32:20
| 2022-11-08T10:12:09
| 2022-11-08T10:12:08
|
MEMBER
| null | null | null | null |
### Describe the bug
When trying to upload a new 🤗 Dataset to the Hub via Python and providing the `token` as a parameter to the `Dataset.push_to_hub` function, it works the first time, assuming the dataset didn't exist before.
But when running `Dataset.push_to_hub` again over the same dataset to update it, it throws a `ConnectionError` while trying to retrieve the `README.md` (which may contain metadata about the dataset that should also be updated). Since the `token` is not propagated, the `DownloadConfig` provided to the `datasets.utils.file_utils.get_from_cache` function doesn't have `use_auth_token` set to `token`; it just uses the default, which is None/False.
So when uploading a dataset via Python with `push_to_hub`, passing the Hugging Face API token as the `token` parameter only works when the dataset is new; otherwise it fails with a `ConnectionError` because the `token` is not propagated as `use_auth_token`.
### Steps to reproduce the bug
Let's create a new dataset in our HF account via Python as:
```python
from datasets import Dataset
data = {"a": [1, 2, 3], "b": [4, 5, 6]}
ds = Dataset.from_dict(data)
ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>)
```
When we create the `Dataset` for the first time it works and there are no issues, but when trying to actually upload a new version of the same dataset (same name under the same username), we encounter the following issue:
```python
from datasets import Dataset
data = {"a": [1, 2, 3], "b": [4, 5, 6]}
ds = Dataset.from_dict(data)
ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>)
>>> ConnectionError: Couldn't reach https://huggingface.co/datasets/alvarobartt/demo/resolve/main/README.md (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/<HF_USERNAME>/<HF_DATASET>/resolve/main/README.md. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`'))
```
### Expected behavior
Ideally, the `token` parameter provided to `push_to_hub` should be propagated and used to download the `README.md` when trying to update a `Dataset`, instead of throwing that exception, so that authentication can be done directly through code without running `huggingface-cli login`, as mentioned at https://huggingface.co/docs/datasets/upload_dataset#upload-with-python.
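A rough sketch of that propagation (illustrative only; the actual fix, per the comments above, landed in #5205). The helper name is made up, but `DownloadConfig` and its `use_auth_token` field exist in `datasets` 2.6.x:
```python
from datasets import DownloadConfig

def readme_download_config(token=None):
    # Forward the token given to push_to_hub into the config used to fetch the
    # existing README.md, instead of falling back to the None/False default.
    return DownloadConfig(use_auth_token=token if token is not None else False)
```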
### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5204/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5204/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 10:39:48
|
https://api.github.com/repos/huggingface/datasets/issues/5202
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5202/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5202/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5202/events
|
https://github.com/huggingface/datasets/issues/5202
| 1,435,886,090
|
I_kwDODunzps5VleIK
| 5,202
|
CI fails after bulk edit of canonical datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Fixed by: https://huggingface.co/datasets/paws/discussions/1"
] | 2022-11-04T10:51:20
| 2023-02-16T09:11:10
| 2023-02-16T09:11:10
|
MEMBER
| null | null | null | null |
```
______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws', config_name = 'labeled_final'
expected_splits = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, config_name, expected_splits",
[
("squad", "plain_text", ["train", "validation"]),
("dalle-mini/wit", "dalle-mini--wit", ["train"]),
("paws", "labeled_final", ["train", "test", "validation"]),
],
)
def test_get_dataset_config_info(path, config_name, expected_splits):
info = get_dataset_config_info(path, config_name=config_name)
assert info.config_name == config_name
> assert list(info.splits.keys()) == expected_splits
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
tests/test_inspect.py:45: AssertionError
_ test_get_dataset_info[paws-expected_configs2-expected_splits_in_first_config2] _
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws'
expected_configs = ['labeled_final', 'labeled_swap', 'unlabeled_final']
expected_splits_in_first_config = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, expected_configs, expected_splits_in_first_config",
[
("squad", ["plain_text"], ["train", "validation"]),
("dalle-mini/wit", ["dalle-mini--wit"], ["train"]),
("paws", ["labeled_final", "labeled_swap", "unlabeled_final"], ["train", "test", "validation"]),
],
)
def test_get_dataset_info(path, expected_configs, expected_splits_in_first_config):
infos = get_dataset_infos(path)
assert list(infos.keys()) == expected_configs
expected_config = expected_configs[0]
assert expected_config in infos
info = infos[expected_config]
assert info.config_name == expected_config
> assert list(info.splits.keys()) == expected_splits_in_first_config
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
tests/test_inspect.py:90: AssertionError
______ test_get_dataset_split_names[paws-labeled_final-expected_splits2] _______
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
path = 'paws', expected_config = 'labeled_final'
expected_splits = ['train', 'test', 'validation']
@pytest.mark.parametrize(
"path, expected_config, expected_splits",
[
("squad", "plain_text", ["train", "validation"]),
("dalle-mini/wit", "dalle-mini--wit", ["train"]),
("paws", "labeled_final", ["train", "test", "validation"]),
],
)
def test_get_dataset_split_names(path, expected_config, expected_splits):
infos = get_dataset_infos(path)
assert expected_config in infos
info = infos[expected_config]
assert info.config_name == expected_config
> assert list(info.splits.keys()) == expected_splits
E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E At index 0 diff: 'test' != 'train'
E Full diff:
E - ['train', 'test', 'validation']
E + ['test', 'train', 'validation']
```
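For reference, the assertions above fail only on split ordering; an order-insensitive check such as the sketch below would pass. The actual resolution, per the comment earlier in this record, was correcting the split order in the Hub metadata (https://huggingface.co/datasets/paws/discussions/1):
```python
# Order-insensitive variant of the failing assertion (illustrative only, using the
# same dataset and config as the test above).
from datasets import get_dataset_config_info

info = get_dataset_config_info("paws", config_name="labeled_final")
assert sorted(info.splits.keys()) == sorted(["train", "test", "validation"])
```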
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5202/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5202/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 103 days, 22:19:50
|
https://api.github.com/repos/huggingface/datasets/issues/5200
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5200/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5200/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5200/events
|
https://github.com/huggingface/datasets/issues/5200
| 1,435,831,559
|
I_kwDODunzps5VlQ0H
| 5,200
|
Some links to canonical datasets in the docs are outdated
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] |
[
"Thanks for catching this, I can go through the docs and replace the links to their corresponding datasets on the Hub!"
] | 2022-11-04T10:06:21
| 2022-11-07T18:40:20
| 2022-11-07T18:40:20
|
CONTRIBUTOR
| null | null | null | null |
As we don't have canonical datasets in the GitHub repo anymore, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and there are probably more. These links should be replaced by links to the corresponding datasets on the Hub.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5200/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5200/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 8:33:59
|
https://api.github.com/repos/huggingface/datasets/issues/5193
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5193/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5193/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5193/events
|
https://github.com/huggingface/datasets/issues/5193
| 1,433,883,780
|
I_kwDODunzps5Vd1SE
| 5,193
|
"One or several metadata. were found, but not in the same directory or in a parent directory"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20109584?v=4",
"events_url": "https://api.github.com/users/lambda-science/events{/privacy}",
"followers_url": "https://api.github.com/users/lambda-science/followers",
"following_url": "https://api.github.com/users/lambda-science/following{/other_user}",
"gists_url": "https://api.github.com/users/lambda-science/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lambda-science",
"id": 20109584,
"login": "lambda-science",
"node_id": "MDQ6VXNlcjIwMTA5NTg0",
"organizations_url": "https://api.github.com/users/lambda-science/orgs",
"received_events_url": "https://api.github.com/users/lambda-science/received_events",
"repos_url": "https://api.github.com/users/lambda-science/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lambda-science/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambda-science/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lambda-science",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Also unrelated but still: https://huggingface.co/docs/datasets/image_dataset#generate-the-dataset\r\n```If your loading script passed the test, you should now have a dataset_infos.json file in your dataset folder.```\r\nIt's not the case anymore as it's now in the readme.md, it was confusing to me",
"And here is my data loader script: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data/blob/main/SDH_16k.py\r\nI have one file archive to download that contains the images for all splits and one `metadata.jsonl` to download that contains the informations about what image goes into what split.",
"Hi @lambda-science! It seems that your repo is recognized as a packaged module [ImageFolder](https://huggingface.co/docs/datasets/main/en/image_dataset#imagefolder), not as a dataset with the custom loading script, because loader looks for a script that has the same name as the dataset repo. So please try to rename your script to `MyoQuant-SDH-Data.py`, this should help.",
"> Hi @lambda-science! It seems that your repo is recognized as a packaged module [ImageFolder](https://huggingface.co/docs/datasets/main/en/image_dataset#imagefolder), not as a dataset with the custom loading script, because loader looks for a script that has the same name as the dataset repo. So please try to rename your script to `MyoQuant-SDH-Data.py`, this should help.\r\n\r\nHi !\r\n\r\nThank you for your answer. That was... embarrassingly easy, sorry for this issue, everything is fixed now ! \r\n\r\nHave a nice day ! :)",
"@lambda-science that's not embarrassing at all! it's actually not clear from the documentation that the script should have the same name, so thank you for the issue, we'll add this information to the docs :) "
] | 2022-11-02T22:46:25
| 2022-11-03T13:39:16
| 2022-11-03T13:35:44
|
NONE
| null | null | null | null |
### Describe the bug
When loading my own dataset, I get an error.
Here is my dataset link: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data
Here is the error I get after loading it with:
```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```
```python
Downloading readme: 100%|██████████████████████████████████████████| 3.34k/3.34k [00:00<00:00, 4.45MB/s]
Using custom data configuration SDH_16k-53e7301a92ab0025
Downloading and preparing dataset None/SDH_16k to /home/corentin/.cache/huggingface/datasets/corentinm7___imagefolder/SDH_16k-53e7301a92ab0025/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f...
Downloading data: 100%|██████████████████████████████████████████| 3.28M/3.28M [00:00<00:00, 4.31MB/s]
Downloading data files: 100%|██████████████████████████████████████████| 1/1 [00:01<00:00, 1.75s/it]
Downloading data: 100%|██████████████████████████████████████████| 1.13G/1.13G [00:15<00:00, 74.3MB/s]
Downloading data files: 100%|██████████████████████████████████████████| 1/1 [00:16<00:00, 16.09s/it]
Extracting data files: 100%|██████████████████████████████████████████| 1/1 [00:13<00:00, 13.16s/it]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/load.py", line 1742, in load_dataset
builder_instance.download_and_prepare(
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 814, in download_and_prepare
self._download_and_prepare(
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1423, in _download_and_prepare
super()._download_and_prepare(
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 905, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1374, in _prepare_split
for key, record in logging.tqdm(
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 394, in _generate_examples
raise ValueError(
ValueError: One or several metadata. were found, but not in the same directory or in a parent directory of /home/corentin/.cache/huggingface/datasets/downloads/extracted/60c4aa8d4da3065bb3d310de4373dffd73bd4dc331aedcb4ee867febe4fdb7cd/validation/sick/2_CG_SDH_TAM_Bin1cKO_ko_pla_4_1640.tif.
```
However the test command is working fine. ```datasets-cli test hugging_face_play/ds_test/SDH_16k.py --save_info --all_configs --force_redownload```
```
Using custom data configuration SDH_16k
Testing builder 'SDH_16k' (1/1)
Downloading and preparing dataset sdh_16k/SDH_16k to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d...
Downloading data: 100%|██████████████████████████████████████████| 1.13G/1.13G [00:14<00:00, 76.5MB/s]
Downloading data files: 100%|██████████████████████████████████████████| 1/1 [00:15<00:00, 15.66s/it]
Downloading data: 100%|██████████████████████████████████████████| 3.28M/3.28M [00:02<00:00, 1.44MB/s]
Downloading data files: 100%|██████████████████████████████████████████| 1/1 [00:03<00:00, 3.21s/it]
Downloading data files: 100%|██████████████████████████████████████████| 1/1 [00:00<00:00, 11586.48it/s]
Extracting data files: 100%|██████████████████████████████████████████| 1/1 [00:13<00:00, 13.42s/it]
Dataset sdh_16k downloaded and prepared to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d. Subsequent calls will reuse this data.
100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 605.27it/s]
Dataset card saved at hugging_face_play/ds_test/README.md
Test successful.
```
### Steps to reproduce the bug
Simply run in Python:
```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```
### Expected behavior
As the test command worked, this error should not appear
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
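For reference, the resolution pointed out in the comments earlier in this record was simply renaming the loading script so that it matches the repository name; otherwise the repo is treated as a packaged `ImageFolder` module. A tiny illustrative check (the local path is hypothetical):
```python
from pathlib import Path

repo = Path("MyoQuant-SDH-Data")            # hypothetical local clone of the dataset repo
expected_script = repo / f"{repo.name}.py"  # i.e. MyoQuant-SDH-Data.py rather than SDH_16k.py
print(expected_script.exists())
```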
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20109584?v=4",
"events_url": "https://api.github.com/users/lambda-science/events{/privacy}",
"followers_url": "https://api.github.com/users/lambda-science/followers",
"following_url": "https://api.github.com/users/lambda-science/following{/other_user}",
"gists_url": "https://api.github.com/users/lambda-science/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lambda-science",
"id": 20109584,
"login": "lambda-science",
"node_id": "MDQ6VXNlcjIwMTA5NTg0",
"organizations_url": "https://api.github.com/users/lambda-science/orgs",
"received_events_url": "https://api.github.com/users/lambda-science/received_events",
"repos_url": "https://api.github.com/users/lambda-science/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lambda-science/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambda-science/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lambda-science",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5193/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5193/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 14:49:19
|
https://api.github.com/repos/huggingface/datasets/issues/5190
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5190/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5190/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5190/events
|
https://github.com/huggingface/datasets/issues/5190
| 1,433,014,626
|
I_kwDODunzps5VahFi
| 5,190
|
`path` is `None` when downloading a custom audio dataset from the Hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! Yes, this is expected behavior - we do this as a security measure to not leak local paths (this info would be useless on other users' machines anyways) and only push audio bytes. \r\n"
] | 2022-11-02T11:51:25
| 2022-11-02T12:55:02
| 2022-11-02T12:55:02
|
MEMBER
| null | null | null | null |
### Describe the bug
I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub.
Locally, I can see the `audio.path` feature is of the expected form `path/to/data_dir`, but when I download the dataset from the Hub, I see `audio.path` is `None`.
Here's an example:
```python
from datasets import load_dataset
ds = load_dataset("lewtun/audio-test-push")
ds["train"][0]
# {
# "audio": {
# "path": None, <-- Is this expected?
# "array": array(
# [
# 3.97140226e-07,
# 7.30310290e-07,
# 7.56406735e-07,
# ...,
# -1.19636677e-01,
# -1.16811886e-01,
# -1.12441722e-01,
# ]
# ),
# "sampling_rate": 44100,
# },
# "song_id": 0,
# "genre_id": 0,
# "genre": "Electronic",
# }
```
Is this expected behaviour? If yes, feel free to close this issue as it's not a true bug then :)
### Steps to reproduce the bug
1. Create an audio dataset with the `audiofolder` feature
2. Push the dataset to the Hub with `push_to_hub()`
3. Download the Hub dataset and inspect the `audio.path` feature
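A minimal sketch of these three steps (the repository id and local audio directory are placeholders, not real resources):
```python
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="./my_audio_dir")  # 1. build the dataset with audiofolder
ds.push_to_hub("<HF_USERNAME>/audio-test-push")              # 2. push it to the Hub
reloaded = load_dataset("<HF_USERNAME>/audio-test-push")     # 3. download it again
print(reloaded["train"][0]["audio"]["path"])                 # None after the round trip
```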
### Expected behavior
`audio.path` points to the file associated with the audio data
### Environment info
- `datasets` version: 2.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5190/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5190/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1:03:37
|
https://api.github.com/repos/huggingface/datasets/issues/5189
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5189/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5189/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5189/events
|
https://github.com/huggingface/datasets/issues/5189
| 1,432,769,143
|
I_kwDODunzps5VZlJ3
| 5,189
|
Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/merveenoyan",
"id": 53175384,
"login": "merveenoyan",
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/merveenoyan",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"I have to admit I'm not a fan of this idea, as this would result in a non-consistent behavior between tabular and non-tabular datasets, which is confusing if done without the context you provided. Instead, we could consider returning a `Dataset` object rather than `DatasetDict` if there is only one split in the generated dataset. But then again, I think this lib is a bit too old to make such changes. @lhoestq @albertvillanova WDYT?\r\n\r\n",
"We can brainstorm here to see how we could make it happen ? And then depending on the options we see if it's a change we can do.\r\n\r\nI'm starting with a first reasoning\r\n\r\nCurrently not passing `split=` in `load_dataset` means \"return a dict with each split\".\r\n\r\nNow what would happen if a dataset has no split ? Ideally it should return one Dataset. And passing `split=` would have no sense. So depending on the dataset content, not passing `split=` should return a dict or a Dataset. In particular, those two cases should work:\r\n```python\r\n# case 1: dataset without split\r\nds = load_dataset(\"dataset_without_split\")\r\nds[0], ds[\"column_name\"], list(ds) # we want this\r\n\r\n# case 2: dataset with splits\r\nds = load_dataset(\"dataset_with_splits\")\r\nds[\"train\"] # this works and can't be changed\r\nds = load_dataset(\"dataset_with_splits\", split=\"train\")\r\nds[0], ds[\"column_name\"], list(ds) # this works and can't be changed\r\n```\r\n\r\nI can see several ideas:\r\n1. allowing `load_dataset` to return a different object based on the dataset content - either a Dataset or a DatasetDict\r\n - we can update `get_dataset_split_names` to return None or a list if users want to know in advance what object will be returned. They can also use `isinstance` _a posteriori_\r\n - but in this case we expect users to be careful when loading datasets and always to extra steps to check if they got a Dataset or DatasetDict\r\n2. merge Dataset and DatasetDict objects\r\n - they already share many functions: map, filter, push_to_hub etc.\r\n - we can define `ds[0]` to be the first item of the first split, and consider that the uses accesses rows from the full table of all the splits concatenated\r\n - however there is a collision when doing `ds[\"column_name\"]` or `ds[\"train\"]` that we need to address: the first returns a list, while the other returns a Dataset.\r\n\r\nWhat are your opinions on those two ideas ? Do you have other ideas in mind ?",
"I like the first idea more (concatenating splits doesn't seem useful, no?). This is a significant breaking change, so I think we should do a poll (or something similar) to gather more info on the actual \"expected behavior\" and wait for Datasets 3.0 if we decide to implement it.\r\n\r\nPS: @thomwolf also suggested the same thing a while ago (https://github.com/huggingface/datasets/issues/743#issuecomment-746074641).",
"I think it's an interesting improvement to the user experience for a case that comes often (no split) so I would definitively support it.\r\n\r\nI would be more in favor of option 2 rather than returning various types of objects from load_dataset and handling carefully the possible collisions indeed",
"Related: if a dataset only has one split, we don't show the splits select control in the dataset viewer on the Hub, eg. compare https://huggingface.co/datasets/hf-internal-testing/fixtures_image_utils/viewer/image/test with https://huggingface.co/datasets/glue/viewer/mnli/test.\r\n\r\nSee https://github.com/huggingface/moon-landing/pull/3858 for more details (internal)",
"I feel like the second idea is a bit more overkill. \r\n@severo I would say it's a bit irrelevant to the problem we have but is a separate problem @polinaeterna is solving at the moment. π
(also discussed on slack)",
"OK, sorry for polluting the thread. The relation I saw with the dataset viewer is that from a UX point of view, we hide the concepts of split and configuration whenever possible -> this issue feels like doing the same in the datasets library.",
"I would agree that returning different types based on the content of the dataset might be confusing.\r\n\r\nWe can do something similar to what `fetch_*` or `load_*` from `sklearn.datasets` do, which is to have an arg which changes the type of the returned type. For instance, `load_iris` would return a dict, but `load_iris(..., return_X_y=True)` would return a tuple.\r\n\r\nHere we can have a similar arg such as `return_X` which would then only return a single `DataSet` or an array.",
"> I feel like the second idea is a bit more overkill.\r\n\r\nOverkill in what sense ?\r\n\r\n> Here we can have a similar arg such as return_X which would then only return a single DataSet or an array.\r\n\r\nRight now one can already pass `split=\"all\"` to get one `Dataset` object with all the data in it (unsplit). We could also have something like `return_all=True` so make the API clearer.\r\n\r\n> I would be more in favor of option 2 rather than returning various types of objects from load_dataset and handling carefully the possible collisions indeed\r\n\r\nI think it would be ok to handle the collision by allowing both `ds[\"train\"]` and `ds[\"column_name\"]` (and maybe adding something like `ds.splits` for those who want to iterate over the splits or add new ones)",
"Would it make sense to remove the notion of \"split\" in `load_dataset`? I feel a lof of it comes from the want to have some sort of group of more or less similar dataset. \"train\"/\"test\"/\"validation\" are the traditional ones, but there are some datasets that have much more splits.\r\n\r\nWould it make sense to force `load_dataset` to only load a single `Dataset` object, and fail if it doesn't point to one. And have another method that's like `load_dataset_group_info` that can return a very arbitrary info class (Dict, List whatever), but you need to pass individual infos to `load_dataset` to run anything? Typically I don't think `DatasetDict.map` is really that helpful, but that's my personal opinion. This would help make things more readable (typically knowing if an object is a `Dataset` or a `DatasetDict`)",
"> Would it make sense to remove the notion of \"split\" in load_dataset?\r\n\r\nI think we need to keep it - though in practice people can name the splits whatever they want anyway.\r\n\r\n> Would it make sense to force load_dataset to only load a single Dataset object, and fail if it doesn't point to one.\r\n\r\nWe need to keep backward compatibility ideally - in particular the load_dataset + ds[\"train\"] one",
"> I think we need to keep it - though in practice people can name the splits whatever they want anyway.\r\n\r\nIt was my understanding that the whole issue was that `load_dataset` returned multiple types of objects.\r\n\r\n> We need to keep backward compatibility ideally - in particular the load_dataset + ds[\"train\"] one\r\n\r\nYeah sorry I meant ideally. One can always start developing `load_dataset_v2` can deprecate the first one and remove it in the longer term.",
"> It was my understanding that the whole issue was that load_dataset returned multiple types of objects.\r\n\r\nYes indeed, but we still want to keep a way to load the train/val/test/whatever splits alone ;)",
"@thomasw21's solution is good but it will break backwards compatibility. π
",
"Started to experiment with merging Dataset and DatasetDict. My plan is to define the splits of a Dataset in Dataset.info.splits (already exists, but never used). A Dataset would then be the concatenation of its splits if they exist.\r\n\r\nNot sure yet this is the way to go. My plan is to play with it and see and share it with you, so we can see if it makes sense from a UX point of view.",
"So just to make sure that I understand the current direction, people will have to be extra careful when handling splits right?\r\nImagine \"potato\" a dataset containing train/validation split:\r\n```\r\nload_dataset(\"potato\") # returns the concatenation of all the splits\r\n```\r\nPreviously the design would force you to choose a split (it would raise otherwise), or manually concat them if you really wanted to play with concatenated splits. Now it would potentially run without raising for a bit of time until you figure out that you've been training on both train and validation split.\r\n\r\nWould it make sense to use a dataset specific default instead of using the concatenation, typically \"potato\" dataset's default would be train?\r\n```\r\nload_dataset(\"potato\") # returns \"train\" split\r\nload_dataset(\"potato\", split=\"train\") # returns \"train\" split\r\nload_dataset(\"potato\", split=\"validation\") # returns \"validation\" split\r\nconcatenate_datasets([load_dataset(\"potato\", split=\"train\"), load_dataset(\"potato\", split=\"validation\")]) # returns concatenation\r\n```",
"> load_dataset(\"potato\") # returns \"train\" split\r\n\r\nTo avoid a breaking change we need to be able to do `load_dataset(\"potato\")[\"validation\"]` as well.\r\n\r\nIn that case I'd wonder where the validation split comes from, since the rows of the dataset wouldn't contain the validation split according to your example. That's why I'm more in favor of concatenating.\r\n\r\nA dataset is one table, that optionally has some split info about subsets (e.g. for training an evaluation)\r\n\r\nThis also allows anyone to re-split the dataset the way they want if they're not happy with the default:\r\n\r\n```python\r\nds = load_dataset(\"potato\").train_test_split(test_size=0.2)\r\ntrain_ds = ds[\"train\"]\r\ntest_ds = ds[\"test\"]\r\n```",
"Just thinking about this, we could just have `to_dataframe()` as `load_dataset(\"blah\").to_dataframe()` to get the whole dataset, and not change anything else.",
"I have a first implementation of option 2 (merging Dataset and DatasetDict) in this PR: https://github.com/huggingface/datasets/pull/5301/\r\n\r\nFeel free to play with it if you're interested, and let me know what you think. In this PR, a dataset is one table that optionally has some split info about subsets.",
"@adrinjalali we already have [to_pandas](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_pandas) AFAIK that essentially does the same thing (for a dataset, not for a dataset dict), I was wondering if it makes sense to have this as I don't know portion of people who load non-tabular datasets into dataframes. @lhoestq I saw your PR and it will break a lot of things imo, WDYT of this option? ",
"> we already have [to_pandas](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.to_pandas) AFAIK that essentially does the same thing (for a dataset, not for a dataset dict)\r\n\r\nyes correct :)\r\n\r\n> I saw your PR and it will break a lot of things imo\r\n\r\nDo you have concrete examples you can share ?\r\n\r\n> WDYT of this option?\r\n\r\nThe to_dataframe option ? I think it not enough, since you'd still get a `DatasetDict({\"train\": Dataset()})` if you load a dataset with no splits (e.g. one CSV), and this doesn't really make sense.\r\n\r\nNote that in the PR I opened you can do\r\n```python\r\nds = load_dataset(\"dataset_with_just_one_csv\") # Dataset type\r\ndf = load_dataset(\"dataset_with_just_one_csv\").to_pandas() # DataFrame type\r\n```",
"@lhoestq no I think @adrinjalali and I meant when user calls `to_dataframe` if there's only train split in `DatasetDict` we could directly load that into dataframe. This might cause a confusion given there's to_pandas but I think it's more intuitive and least breaking change. (given people -who use `datasets` for tabular workflows- will eventually call `to_pandas` anyway) ",
"So in that case it would be fine to still end up with a dataset dict with a \"train\" split ?",
"yeah what I mean is this:\r\n\r\n```py\r\ndataset = load_dataset(\"blah\")\r\n\r\n# deal with a split of the dataset\r\ntrain = dataset[\"train\"]\r\ntrain_df = dataset[\"train\"].to_dataframe()\r\n\r\n# deal with the whole dataset\r\ndataset_df = dataset.to_dataframe()\r\n```\r\n\r\nSo we do two things to improve tabular experience:\r\n- allow datasets to have a single split\r\n- add `to_dataframe` to the root dict level so that users can simply call `df = load_dataset(\"blah\").to_dataframe()` and have it in their `pandas.DataFrame` object.",
"Ok ! Note that we already have `Dataset.to_pandas()` so for consistency I'd call it `DatasetDict.to_pandas()` as well, does it sound good to you ? This is something we can add pretty easily",
"yeah that sounds perfect @lhoestq !",
"> So just to make sure that I understand the current direction, people will have to be extra careful when handling splits right?\r\n\r\nWe can raise an error if someone does `load_dataset(...)[0]` if the dataset is made of several splits, and return the first example if there's one or zero splits (i.e. when it's not ambiguous). Had this idea from the dicussions in #5312 WDYT @thomasw21 ?",
"> We can raise an error if someone does load_dataset(...)[0] if the dataset is made of several splits,\r\n\r\nBut then how is that different to have the distinction between DatasetDict and Dataset then? Is it just that \"default behaviour when there are no splits or single split, it returns directly the split when there's no ambiguity\".\r\n\r\nAlso I was wondering how the concatenation could have heavy impacts when running mapping functions/filtering in batch? Typically can batch be somehow mixed?",
"> But then how is that different to have the distinction between DatasetDict and Dataset then?\r\n\r\nBecause it doesn't make sense to be able to do `example = ds[0]` or `examples = list(ds)` on a class named `DatasetDict` of type `Dict[str, Dataset]`.\r\n\r\n> Also I was wondering how the concatenation could have heavy impacts when running mapping functions/filtering in batch? Typically can batch be somehow mixed?\r\n\r\nNo, we run each function on each split separated",
"> Because it doesn't make sense to be able to do example = ds[0] or examples = list(ds) on a class named DatasetDict of type Dict[str, Dataset].\r\n\r\nHum but you're still going to raise an exception in both those cases with your current change no? (actually list(ds) would return the name of the splits no?)\r\n\r\n> No, we run each function on each split separated\r\n\r\nNice!"
] | 2022-11-02T09:15:02
| 2022-12-06T12:13:17
| null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
Sorry for the cryptic name, but I'd like to explain using the code itself. When I want to load a specific dataset from a repository (for instance, this one: https://huggingface.co/datasets/inria-soda/tabular-benchmark)
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
print(next(iter(dataset["train"])))
```
The `datasets` library is essentially designed for people who'd like to use benchmark datasets across various modalities to fine-tune their models, and these benchmark datasets usually come with pre-defined train and test splits. However, in tabular workflows, fixed train and test splits usually lead to the model overfitting to the validation split, so users tend to rely on validation techniques like `StratifiedKFoldCrossValidation`, or `GridSearchCrossValidation` when they tune hyperparameters; in other words, they usually create their own splits. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced, but the split is done by the authors.
It's a bit confusing for the average tabular user to load a dataset and see `"train"`, so it would be nice if we did not load the dataset into a split called `train` by default.
```diff
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
-print(next(iter(dataset["train"])))
+print(next(iter(dataset)))
```
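For context, a minimal sketch of the workaround today (assuming the single CSV above ends up entirely in a `train` split) is to address that split explicitly before switching to a pandas-style workflow:
```python
# Minimal workaround sketch: the single CSV is exposed under a "train" split,
# so tabular users have to reach into it explicitly before going to pandas.
from datasets import load_dataset

ds = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"])
df = ds["train"].to_pandas()
```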
### Motivation
I explained it above.
### Your contribution
I think this is quite a big change that only seems small (e.g. how do we determine which datasets should not be loaded into a `train` split?), so it's best if we discuss it first!
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5189/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5189/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5186
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5186/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5186/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5186/events
|
https://github.com/huggingface/datasets/issues/5186
| 1,432,045,011
|
I_kwDODunzps5VW0XT
| 5,186
|
Incorrect error message when Dataset.from_sql fails and sqlalchemy not installed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] |
[
"Hi! The first `Dataset.from_sql` call also outputs the \"ImportError: Using URI string without sqlalchemy installed.\" message, but you also get \"During handling of the above exception another exception occurred: ...\" after which the ValueError is printed. I agree that this behavior makes it easy to miss the original error. \r\n\r\nI think we can improve this by not throwing the writer's ValueError if the error from a dataset script is already being handled to make debugging easier. @lhoestq @albertvillanova wdyt?",
"Yup ! Alternatively the error can be raised in sql.py before generating the examples ? In `_info` for example",
"yea @lhoestq that would probably be good. The 2nd error is useless if the 1st error is the real reason it failed. "
] | 2022-11-01T20:25:51
| 2022-11-15T18:24:39
| 2022-11-15T18:24:39
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When calling `Dataset.from_sql` (in my case, with sqlite3), it fails with a message ```ValueError: Please pass `features` or at least one example when writing data``` when I don't have `sqlalchemy` installed.
### Steps to reproduce the bug
Make a new sqlite db with `sqlite3` and `pandas` from a remote [URL](https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv).
```python
import sqlite3
import pandas as pd
from datasets import Dataset
conn = sqlite3.connect('us_covid_data.db')
df = pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv')
df.to_sql('states', conn, if_exists='replace')
```
Then if you try to query this DB like this:
```python
ds = Dataset.from_sql('''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db")
```
You run into the error I described above:
```ValueError: Please pass `features` or at least one example when writing data```
However, if you try to pass features, as the error suggests, then you get an error that tells you the underlying problem...
```python
from datasets import Dataset, Features, Value
features = Features({
'date': Value('date32'),
'label': Value('string'),
'fips': Value('int32'),
'cases': Value('int32'),
'deaths': Value('int32')
})
ds = Dataset.from_sql(
'''SELECT * from states WHERE state=="New York";''',
"sqlite:///us_covid_data.db",
features=features
)
```
Which results in the actual underlying error: `ImportError: Using URI string without sqlalchemy installed.`
### Expected behavior
Instead of the `ValueError` about needing to pass `features`, we should surface the actual underlying error, namely that SQLAlchemy isn't installed, when it isn't found in the environment.
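A rough sketch of what that early check could look like (hypothetical code, not the actual `datasets` SQL builder, in the spirit of raising before any examples are generated):
```python
# Hypothetical sketch: fail fast when `con` is a URI string but sqlalchemy is missing,
# so the real problem surfaces instead of the writer's misleading ValueError.
import importlib.util

def _check_con_requirements(con):
    if isinstance(con, str) and importlib.util.find_spec("sqlalchemy") is None:
        raise ImportError("Using a URI string for `con` requires sqlalchemy: pip install sqlalchemy")
```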
### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyArrow version: 10.0.0
- Pandas version: 1.2.5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5186/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5186/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 13 days, 21:58:48
|
https://api.github.com/repos/huggingface/datasets/issues/5185
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5185/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5185/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5185/events
|
https://github.com/huggingface/datasets/issues/5185
| 1,432,021,611
|
I_kwDODunzps5VWupr
| 5,185
|
Allow passing a subset of output features to Dataset.map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanderland",
"id": 48946947,
"login": "sanderland",
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"repos_url": "https://api.github.com/users/sanderland/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanderland",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[] | 2022-11-01T20:07:20
| 2022-11-01T20:07:34
| null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
Currently, map does one of two things to the features (if I'm not mistaken):
* when you do not pass features, types are assumed to be equal to the input if they can be cast, and inferred otherwise
* when you pass a full specification of features, output features are set to this
However, sometimes you want to pass only some of the output types, particularly when the first of these modes produces an incorrect type. This currently crashes.
### Motivation
To give a little background: this problem appears in converting labels to ids, where the labels happen to be floats rather than strings
Consider the following use of map to convert from float to int
```python
data = Dataset.from_dict({'y':[1.0,2.0,3.0]})
mapped = data.map(lambda r: {'y': int(r['y'])})
mapped['y'] # is floats, not ints
```
The result is a float again, since after the mapping operation it forces the old datatypes back on the data.
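For this simple case there is a workaround, sketched below with the existing `cast_column` API, although it doesn't generalize nicely:
```python
# Workaround sketch: let map() keep the inferred/old dtype, then cast the column explicitly.
from datasets import Dataset, Value

data = Dataset.from_dict({'y': [1.0, 2.0, 3.0]})
mapped = data.map(lambda r: {'y': int(r['y'])})
mapped = mapped.cast_column('y', Value('int64'))
print(mapped.features['y'])  # Value(dtype='int64', id=None)
```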
Passing `features=Features({"y": Value(dtype="int64")})` to map works in principle, but then extending it a little to e.g.
```python
def format_data(r):
return {**tokenizer(r["text"]), "y": int(r["y"])}
data = Dataset.from_dict({"y": [1.0, 2.0, 3.0], "text": ["one", "two", "three"]})
mapped = data.map(
format_data,
features=Features({'y': Value(dtype="int64")}),
remove_columns=["text"],
)
```
Results in a crash in the dataset internals, as it expects either all or none of the output features to be specified.
Of course one can pass a full feature specification, but this becomes tokenizer-specific and very awkward.
### Your contribution
I've looked at `write_batch` and particularly `col_type = features[col] if features else None`, but checking for `col in features` here makes it fail elsewhere, and the structure makes it hard to understand how and why. I don't think I'll have the time to get to the bottom of this myself anytime soon.
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5185/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5185/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5183
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5183/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5183/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5183/events
|
https://github.com/huggingface/datasets/issues/5183
| 1,431,418,066
|
I_kwDODunzps5VUbTS
| 5,183
|
Loading an external dataset in a format similar to conll2003
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4",
"events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}",
"followers_url": "https://api.github.com/users/Taghreed7878/followers",
"following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}",
"gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Taghreed7878",
"id": 112555442,
"login": "Taghreed7878",
"node_id": "U_kgDOBrV1sg",
"organizations_url": "https://api.github.com/users/Taghreed7878/orgs",
"received_events_url": "https://api.github.com/users/Taghreed7878/received_events",
"repos_url": "https://api.github.com/users/Taghreed7878/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Taghreed7878",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2022-11-01T13:18:29
| 2022-11-02T11:57:50
| 2022-11-02T11:57:50
|
NONE
| null | null | null | null |
I'm trying to load a custom dataset into a `Dataset` object. It's similar to conll2003, but with only 2 columns (word, entity). I used the following script:
```python
features = datasets.Features(
    {"tokens": datasets.Sequence(datasets.Value("string")),
     "ner_tags": datasets.Sequence(
         datasets.features.ClassLabel(
             names=["B-PER", .... etc.]))}
)

from datasets import Dataset

INPUT_COLUMNS = "tokens ner_tags".split(" ")

def read_conll(file):
    #all_labels = []
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line:
                if line.startswith("-DOCSTART-") and example["tokens"] != []:
                    print(idx, example)
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
                elif line == "\n" or (line.startswith("-DOCSTART-") and example["tokens"] == []):
                    continue
                else:
                    row_cols = line.split(" ")
                    for i, col in enumerate(example):
                        example[col] = row_cols[i].rstrip()

dset = Dataset.from_generator(read_conll, gen_kwargs={"file": "/content/new_train.txt"}, features=features)
```
The following error happened:
```
/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in <genexpr>(.0)
    285     for key in unique_values(itertools.chain(*dicts)):  # set merge all keys
    286         # Will raise KeyError if the dict don't have the same keys
--> 287     yield key, tuple(d[key] for d in dicts)
    288
TypeError: tuple indices must be integers or slices, not str
```
What does this mean and what should I modify?
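One likely explanation (a sketch, not a verified fix): `Dataset.from_generator` expects the generator to yield plain example dicts, whereas the script above yields `(idx, example)` tuples, and the per-column values are overwritten instead of appended. A corrected generator might look like this:
```python
# Sketch of a corrected generator (assumes the same INPUT_COLUMNS and features as above):
# yield the example dict itself and append word/tag values per column.
def read_conll(file):
    example = {col: [] for col in INPUT_COLUMNS}
    with open(file) as f:
        for line in f:
            if line.startswith("-DOCSTART-"):
                if example["tokens"]:
                    yield example  # a dict, not (idx, example)
                    example = {col: [] for col in INPUT_COLUMNS}
            elif line.strip() == "":
                continue
            else:
                row_cols = line.rstrip().split(" ")
                example["tokens"].append(row_cols[0])
                example["ner_tags"].append(row_cols[1])
    if example["tokens"]:  # don't drop the last document
        yield example

dset = Dataset.from_generator(read_conll, gen_kwargs={"file": "/content/new_train.txt"}, features=features)
```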
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4",
"events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}",
"followers_url": "https://api.github.com/users/Taghreed7878/followers",
"following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}",
"gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Taghreed7878",
"id": 112555442,
"login": "Taghreed7878",
"node_id": "U_kgDOBrV1sg",
"organizations_url": "https://api.github.com/users/Taghreed7878/orgs",
"received_events_url": "https://api.github.com/users/Taghreed7878/received_events",
"repos_url": "https://api.github.com/users/Taghreed7878/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Taghreed7878",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5183/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5183/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 22:39:21
|
https://api.github.com/repos/huggingface/datasets/issues/5182
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5182/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5182/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5182/events
|
https://github.com/huggingface/datasets/issues/5182
| 1,431,029,547
|
I_kwDODunzps5VS8cr
| 5,182
|
Add notebook / other resource links to the task-specific data loading guides
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
] |
[
"Yea this would be great! We would need an object detection tutorial notebook too if it doesn't already exist there. ",
"There is one: https://huggingface.co/docs/datasets/object_detection.\r\n\r\nI will start the work. "
] | 2022-11-01T07:57:26
| 2022-11-03T01:49:57
| 2022-11-03T01:49:57
|
MEMBER
| null | null | null | null |
Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model?
For example, here in [https://huggingface.co/docs/datasets/image_classification] we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb.
Applies to https://huggingface.co/docs/datasets/object_detection as well.
Cc: @osanseviero @nateraw
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5182/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5182/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 17:52:31
|
https://api.github.com/repos/huggingface/datasets/issues/5181
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5181/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5181/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5181/events
|
https://github.com/huggingface/datasets/issues/5181
| 1,431,027,102
|
I_kwDODunzps5VS72e
| 5,181
|
Add a guide for semantic segmentation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
] |
[
"Sure this sounds great! Would this be pure torchvision, albumentations, or something else?",
"I am considering `torchvision` and `albumentations`. Also [works with TensorFlow](https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_Finetune.ipynb). \r\n\r\nI am assigning the issue to myself then. "
] | 2022-11-01T07:54:50
| 2022-11-04T18:23:36
| 2022-11-04T18:23:36
|
MEMBER
| null | null | null | null |
Currently, we have these guides for object detection and image classification:
* https://huggingface.co/docs/datasets/object_detection
* https://huggingface.co/docs/datasets/image_classification
I am proposing adding a similar guide for semantic segmentation.
I am happy to contribute a PR for it.
Cc: @osanseviero @nateraw
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5181/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5181/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 10:28:46
|
https://api.github.com/repos/huggingface/datasets/issues/5180
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5180/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5180/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5180/events
|
https://github.com/huggingface/datasets/issues/5180
| 1,431,012,438
|
I_kwDODunzps5VS4RW
| 5,180
|
An example or recommendations for creating large image datasets?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"The beam utilities allow to prepare a dataset as parquet in your cloud storage. From my perspective this CLI is not super easy to use, but we've been working on a new python API to prepare a dataset in your cloud storage:\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nbuilder = load_dataset_builder(\"c4\", \"en\")\r\nbuilder.download_and_prepapre(\"s3://my-bucket/c4\", file_format=\"parquet\")\r\n```\r\n\r\nAnd to use Beam you can do:\r\n```python\r\nbeam_runner = ... # one of \"SparkRunner\", \"DataFlowRunner\", \"DirectRunner\", etc.\r\nbeam_options = ...\r\n\r\nbuilder.download_and_prepapre(\r\n \"s3://my-bucket/c4\",\r\n file_format=\"parquet\",\r\n beam_runner=beam_runner,\r\n beam_options=beam_options\r\n)\r\n```\r\n\r\nThough Beam can be used ONLY if there is a dataset script based on the `BeamBasedBuilder` right now - it doesn't work on an arbitrary dataset (see [wikipedia.py](https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py) for example).",
"Thanks! \r\n\r\nWould be nice to have something similar for creating large image datasets. "
] | 2022-11-01T07:38:38
| 2022-11-02T10:17:11
| null |
MEMBER
| null | null | null | null |
I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do?
As a user, I was wondering if we have this support for creating large image datasets. If so, we should mention that [here](https://huggingface.co/docs/datasets/image_dataset).
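For concreteness, here is a minimal sketch of what I mean by creating a large image dataset without Beam (the folder path and repo id below are made up):
```python
# Sketch (hypothetical path and repo id): build an image dataset from a local folder
# and push it to the Hub in shards, without any Beam pipeline.
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="/path/to/images")
ds.push_to_hub("my-username/my-large-image-dataset", max_shard_size="500MB")
```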
Cc @lhoestq
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5180/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5180/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5179
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5179/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5179/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5179/events
|
https://github.com/huggingface/datasets/issues/5179
| 1,430,826,100
|
I_kwDODunzps5VSKx0
| 5,179
|
`map()` fails midway due to format incompatibility
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Cc: @lhoestq ",
"You can end up with a list instead of a tensor if all the tensors inside the list can't be stacked together - can you make sure all your inputs are tensors with the same shape ?",
"Is there an easy way to ensure it?",
"You can make sure your `tokenize` function always return tensors of the same shape",
"I modified my `tokenize()` function to be like so:\r\n\r\n```py\r\ndef tokenize(batch):\r\n return tokenizer(batch[\"text\"], padding=\"longest\")\r\n```\r\n\r\nso that the padding always happens w.r.t to the length of the longest sequence in a batch. The issue still persists. Is there any other way? ",
"tbh I though your first implementation was fine\r\n```python\r\ndef tokenize(batch):\r\n return tokenizer(batch[\"text\"], padding=True, truncation=True)\r\n```\r\n\r\nMaybe you can try to see what the erroring data looks like by adding a try/except in `get_test_accuracy` ?",
"This is what I got. \r\n\r\nFor the non-erroring data, it looks like (without the labels):\r\n\r\n```\r\ntensor([[ 101, 10047, 3110, ..., 0, 0, 0],\r\n [ 101, 1045, 2514, ..., 0, 0, 0],\r\n [ 101, 1045, 2514, ..., 0, 0, 0],\r\n ...,\r\n [ 101, 1045, 2005, ..., 0, 0, 0],\r\n [ 101, 1045, 2572, ..., 0, 0, 0],\r\n [ 101, 10047, 7481, ..., 0, 0, 0]]) 128\r\ntensor([[1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n ...,\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0]]) 128\r\n```\r\n\r\nFor the erroring part:\r\n\r\n```\r\n[tensor([ 101, 1045, 2064, 2102, 2393, 3110, 2066, 2242, 6355, 3047, 2004, 2574,\r\n 2004, 1996, 8629, 2357, 2125, 4299, 1045, 2071, 2424, 2009, 2006, 7858,\r\n 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]), tensor([ 101, 10047, 5458, 1997, 3110, 11654, 1998, 11055, 102, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]), tensor([ 101, 1045, 2074, 2064, 2102, 6073, 1996, 3110, 2008, 2026,\r\n 14982, 2000, 5587, 2203, 16650, 29563, 2030, 2569, 4506, 2052,\r\n 2191, 1037, 2738, 11552, 2208, 17044, 14540, 2100, 3375, 102,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]),\r\n...\r\n\r\n[tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),\r\n...\r\n```\r\n\r\nI also tried investigating the shapes of the individual entries within a `batch` without the labels:\r\n\r\n```py\r\ndef get_test_accuracy(model):\r\n def fn(batch): \r\n try:\r\n inputs = {k:v.to(device) for k,v in batch.items() \r\n if k in tokenizer.model_input_names}\r\n with torch.no_grad():\r\n output = model(**inputs)\r\n pred_label = torch.argmax(output.logits, axis=-1)\r\n return {\"predicted_label\": pred_label.cpu().numpy()}\r\n except:\r\n for k in batch:\r\n if k != \"label\":\r\n for i in range(len(batch[k])):\r\n print(batch[k][i].shape)\r\n return fn\r\n```\r\n\r\nThey are:\r\n\r\n```\r\n...\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\n```\r\n\r\nThere are differing shapes. I understand if I set `batch_size=None` in `emotions_encoded = emotions.map(tokenize, batched=True)` the problem should be fixed as the whole dataset would be treated as a single batch. 
But is there a way to do that in batches? ",
"If you use the same batch_size for your two maps, you should get the exact same batches - therefore all containing the same shapes",
"Oh I see. Thanks. Closing this issue. "
] | 2022-11-01T03:57:59
| 2022-11-08T11:35:26
| 2022-11-08T11:35:26
|
MEMBER
| null | null | null | null |
### Describe the bug
I am using the `emotion` dataset from the Hub for sequence classification. After training the model, I am using it to generate predictions for all the entries in the `validation` split of the dataset.
```py
def get_test_accuracy(model):
    def fn(batch):
        inputs = {k: v.to(device) for k, v in batch.items()
                  if k in tokenizer.model_input_names}
        with torch.no_grad():
            output = model(**inputs)
        pred_label = torch.argmax(output.logits, axis=-1)
        return {"predicted_label": pred_label.cpu().numpy()}
    return fn
```
This is how the `get_test_accuracy()` is being used:
```py
emotions = load_dataset("emotion")

def tokenize(batch):
    return tokenizer(batch["text"], padding=True, truncation=True)

emotions_encoded = emotions.map(tokenize, batched=True)
emotions_encoded.set_format("torch",
                            columns=["input_ids", "attention_mask", "label"])

accuracy_fn = get_test_accuracy(model)  # the closure defined above
new_dataset = emotions_encoded["validation"].map(
    accuracy_fn, batched=True, batch_size=128
)
```
Complete code is available in the Colab Notebook provided below.
The `map()` process fails midway giving:
```shell
AttributeError Traceback (most recent call last)
<ipython-input-8-ad24ac288eb4> in <module>
2
3 new_dataset = emotions_encoded["validation"].map(
----> 4 accuracy_fn, batched=True, batch_size=128
5 )
7 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2588 new_fingerprint=new_fingerprint,
2589 disable_tqdm=disable_tqdm,
-> 2590 desc=desc,
2591 )
2592 else:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
582 self: "Dataset" = kwargs.pop("self")
583 # apply actual function
--> 584 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
585 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
586 for dataset in datasets:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
478 # Call actual function
479
--> 480 out = func(self, *args, **kwargs)
481
482 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2970 indices,
2971 check_same_num_examples=len(input_dataset.list_indexes()) > 0,
-> 2972 offset=offset,
2973 )
2974 except NumExamplesMismatchError:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2850 if with_rank:
2851 additional_args += (rank,)
-> 2852 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2853 if update_data is None:
2854 # Check if the function returns updated examples
<ipython-input-6-4e0d280426f6> in fn(batch)
1 def get_test_accuracy(model):
2 def fn(batch):
----> 3 inputs = {k:v.to(device) for k,v in batch.items()
4 if k in tokenizer.model_input_names}
5 with torch.no_grad():
<ipython-input-6-4e0d280426f6> in <dictcomp>(.0)
2 def fn(batch):
3 inputs = {k:v.to(device) for k,v in batch.items()
----> 4 if k in tokenizer.model_input_names}
5 with torch.no_grad():
6 output = model(**inputs)
AttributeError: 'list' object has no attribute 'to'
```
As you'd notice in the notebook, the process fails _midway_ and not at the beginning.
Is this expected?
### Steps to reproduce the bug
Colab Notebook:
https://colab.research.google.com/gist/sayakpaul/d1570d537faf39040d02d77b1ed7de07/scratchpad.ipynb
### Expected behavior
The mapping process should complete as is. If you switch the `split` to `test` it works as expected.
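For reference, the fix that came out of the comments (a sketch, assuming the same model and tokenizer as above) is to use the same `batch_size` in both `map()` calls, so that `padding=True` produces identical batch boundaries and every batch can be stacked into tensors of one shape:
```py
# Sketch of the fix discussed in the comments: one batch_size for both map() calls.
emotions_encoded = emotions.map(tokenize, batched=True, batch_size=128)
emotions_encoded.set_format("torch",
                            columns=["input_ids", "attention_mask", "label"])

new_dataset = emotions_encoded["validation"].map(
    get_test_accuracy(model), batched=True, batch_size=128
)
```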
### Environment info
Colab
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5179/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5179/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7 days, 7:37:27
|
https://api.github.com/repos/huggingface/datasets/issues/5178
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5178/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5178/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5178/events
|
https://github.com/huggingface/datasets/issues/5178
| 1,430,800,810
|
I_kwDODunzps5VSEmq
| 5,178
|
Unable to download the Chinese `wikipedia`, the dumpstatus.json not found!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/beyondguo",
"id": 37113676,
"login": "beyondguo",
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/beyondguo",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"In the dumps page of the wiki (https://dumps.wikimedia.org/zhwiki/), I found the following dumps:\r\n```\r\nIndex of /zhwiki/\r\n[../](https://dumps.wikimedia.org/)\r\n[20220701/](https://dumps.wikimedia.org/zhwiki/20220701/) 21-Aug-2022 01:48 -\r\n[20220720/](https://dumps.wikimedia.org/zhwiki/20220720/) 02-Sep-2022 01:48 -\r\n[20220801/](https://dumps.wikimedia.org/zhwiki/20220801/) 21-Sep-2022 01:44 -\r\n[20220820/](https://dumps.wikimedia.org/zhwiki/20220820/) 01-Oct-2022 09:39 -\r\n[20220901/](https://dumps.wikimedia.org/zhwiki/20220901/) 20-Oct-2022 09:44 -\r\n[20220920/](https://dumps.wikimedia.org/zhwiki/20220920/) 23-Sep-2022 12:06 -\r\n[20221001/](https://dumps.wikimedia.org/zhwiki/20221001/) 04-Oct-2022 15:10 -\r\n[20221020/](https://dumps.wikimedia.org/zhwiki/20221020/) 01-Nov-2022 03:15 -\r\n[latest/](https://dumps.wikimedia.org/zhwiki/latest/) 01-Nov-2022 03:15 -\r\n```\r\n\r\nMaybe the older dumps are not available which caused the downloading failure? \r\n\r\nHowever, when I changed to the newer version:\r\n```\r\ndata = load_dataset('wikipedia', '20220701.zh', beam_runner='DirectRunner')\r\n```\r\n\r\nit shows:\r\n```\r\nValueError: BuilderConfig 20220701.zh not found. Available: ['20220301.aa', '20220301.ab', '20220301.ace', '20220301.ady', '20220301.af', '20220301.ak', '20220301.als', '20220301.am', '20220301.an', '20220301.ang', '20220301.ar', '20220301.arc', '20220301.arz', '20220301.as', '20220301.ast', '20220301.atj', '20220301.av', '20220301.ay', '20220301.az', '20220301.azb', '20220301.ba', '20220301.bar', '20220301.bat-smg', '20220301.bcl', '20220301.be', '20220301.be-x-old', '20220301.bg', '20220301.bh', '20220301.bi', '20220301.bjn', '20220301.bm', '20220301.bn', '20220301.bo', '20220301.bpy', '20220301.br', '20220301.bs', '20220301.bug', '20220301.bxr', '20220301.ca', '20220301.cbk-zam', '20220301.cdo', '20220301.ce', '20220301.ceb', '20220301.ch', '20220301.cho', '20220301.chr', '20220301.chy', '20220301.ckb', '20220301.co', '20220301.cr', '20220301.crh', '20220301.cs', '20220301.csb', '20220301.cu', '20220301.cv', '20220301.cy', '20220301.da', '20220301.de', '20220301.din', '20220301.diq', '20220301.dsb', '20220301.dty', '20220301.dv', '20220301.dz', '20220301.ee', '20220301.el', '20220301.eml', '20220301.en', '20220301.eo', '20220301.es', '20220301.et', '20220301.eu', '20220301.ext', '20220301.fa', '20220301.ff', '20220301.fi', '20220301.fiu-vro', '20220301.fj', '20220301.fo', '20220301.fr', '20220301.frp', '20220301.frr', '20220301.fur', '20220301.fy', '20220301.ga', '20220301.gag', '20220301.gan', '20220301.gd', '20220301.gl', '20220301.glk', '20220301.gn', '20220301.gom', '20220301.gor', '20220301.got', '20220301.gu', '20220301.gv', '20220301.ha', '20220301.hak', '20220301.haw', '20220301.he', '20220301.hi', '20220301.hif', '20220301.ho', '20220301.hr', '20220301.hsb', '20220301.ht', '20220301.hu', '20220301.hy', '20220301.ia', '20220301.id', '20220301.ie', '20220301.ig', '20220301.ii', '20220301.ik', '20220301.ilo', '20220301.inh', '20220301.io', '20220301.is', '20220301.it', '20220301.iu', '20220301.ja', '20220301.jam', '20220301.jbo', '20220301.jv', '20220301.ka', '20220301.kaa', '20220301.kab', '20220301.kbd', '20220301.kbp', '20220301.kg', '20220301.ki', '20220301.kj', '20220301.kk', '20220301.kl', '20220301.km', '20220301.kn', '20220301.ko', '20220301.koi', '20220301.krc', '20220301.ks', '20220301.ksh', '20220301.ku', '20220301.kv', '20220301.kw', '20220301.ky', '20220301.la', '20220301.lad', '20220301.lb', '20220301.lbe', '20220301.lez', 
'20220301.lfn', '20220301.lg', '20220301.li', '20220301.lij', '20220301.lmo', '20220301.ln', '20220301.lo', '20220301.lrc', '20220301.lt', '20220301.ltg', '20220301.lv', '20220301.mai', '20220301.map-bms', '20220301.mdf', '20220301.mg', '20220301.mh', '20220301.mhr', '20220301.mi', '20220301.min', '20220301.mk', '20220301.ml', '20220301.mn', '20220301.mr', '20220301.mrj', '20220301.ms', '20220301.mt', '20220301.mus', '20220301.mwl', '20220301.my', '20220301.myv', '20220301.mzn', '20220301.na', '20220301.nah', '20220301.nap', '20220301.nds', '20220301.nds-nl', '20220301.ne', '20220301.new', '20220301.ng', '20220301.nl', '20220301.nn', '20220301.no', '20220301.nov', '20220301.nrm', '20220301.nso', '20220301.nv', '20220301.ny', '20220301.oc', '20220301.olo', '20220301.om', '20220301.or', '20220301.os', '20220301.pa', '20220301.pag', '20220301.pam', '20220301.pap', '20220301.pcd', '20220301.pdc', '20220301.pfl', '20220301.pi', '20220301.pih', '20220301.pl', '20220301.pms', '20220301.pnb', '20220301.pnt', '20220301.ps', '20220301.pt', '20220301.qu', '20220301.rm', '20220301.rmy', '20220301.rn', '20220301.ro', '20220301.roa-rup', '20220301.roa-tara', '20220301.ru', '20220301.rue', '20220301.rw', '20220301.sa', '20220301.sah', '20220301.sat', '20220301.sc', '20220301.scn', '20220301.sco', '20220301.sd', '20220301.se', '20220301.sg', '20220301.sh', '20220301.si', '20220301.simple', '20220301.sk', '20220301.sl', '20220301.sm', '20220301.sn', '20220301.so', '20220301.sq', '20220301.sr', '20220301.srn', '20220301.ss', '20220301.st', '20220301.stq', '20220301.su', '20220301.sv', '20220301.sw', '20220301.szl', '20220301.ta', '20220301.tcy', '20220301.te', '20220301.tet', '20220301.tg', '20220301.th', '20220301.ti', '20220301.tk', '20220301.tl', '20220301.tn', '20220301.to', '20220301.tpi', '20220301.tr', '20220301.ts', '20220301.tt', '20220301.tum', '20220301.tw', '20220301.ty', '20220301.tyv', '20220301.udm', '20220301.ug', '20220301.uk', '20220301.ur', '20220301.uz', '20220301.ve', '20220301.vec', '20220301.vep', '20220301.vi', '20220301.vls', '20220301.vo', '20220301.wa', '20220301.war', '20220301.wo', '20220301.wuu', '20220301.xal', '20220301.xh', '20220301.xmf', '20220301.yi', '20220301.yo', '20220301.za', '20220301.zea', '20220301.zh', '20220301.zh-classical', '20220301.zh-min-nan', '20220301.zh-yue', '20220301.zu']\r\n```\r\n\r\nSo I guess adding the latest dumps versions to the `BuilderConfig` may solve the problem? But how to add it?",
"Hi, @beyondguo, thanks for reporting.\r\n\r\nYou have all the information in the dataset card: https://huggingface.co/datasets/wikipedia\r\n\r\n> Then, you can load any subset of Wikipedia per language and per date this way:\r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> load_dataset(\"wikipedia\", language=\"sw\", date=\"20220120\", beam_runner=...) \r\n> ```\r\n> where you can pass as beam_runner any Apache Beam supported runner for (distributed) data processing (see [here](https://beam.apache.org/documentation/runners/capability-matrix/)). Pass \"DirectRunner\" to run it on your machine.\r\n> \r\n> You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).\r\n\r\nNote that you have to pass the language and date as keyword arguments, and the available dates depend on the language and can be found on Wikimedia website.",
"Also:\r\n> Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n> ```python\r\n> load_dataset(\"wikipedia\", \"20220301.en\")\r\n> ```\r\n> The list of pre-processed subsets is:\r\n> - \"20220301.de\"\r\n> - \"20220301.en\"\r\n> - \"20220301.fr\"\r\n> - \"20220301.frr\"\r\n> - \"20220301.it\"\r\n> - \"20220301.simple\""
] | 2022-11-01T03:17:55
| 2022-11-02T08:27:15
| 2022-11-02T08:24:29
|
NONE
| null | null | null | null |
### Describe the bug
I tried:
`data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')`
and
`data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')`
but both got:
`FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json`
the full report is:
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-13-d07c5021090c> in <module>
1 from datasets import load_dataset
2
----> 3 data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')<?, ?it/s]
/opt/conda/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1740
1741 # Download and prepare data
-> 1742 builder_instance.download_and_prepare(
1743 download_config=download_config,
1744 download_mode=download_mode,
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
812 **download_and_prepare_kwargs,
813 }
--> 814 self._download_and_prepare(
815 dl_manager=dl_manager,
816 verify_infos=verify_infos,
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
1645 options=beam_options,
1646 )
-> 1647 super()._download_and_prepare(
1648 dl_manager, verify_infos=False, pipeline=pipeline, **prepare_splits_kwargs
1649 ) # TODO handle verify_infos in beam datasets
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
881 split_dict = SplitDict(dataset_name=self.name)
882 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 883 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
884
885 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline)
943 info_url = _base_url(lang) + _INFO_FILE
944 # Use dictionary since testing mock always returns the same result.
--> 945 downloaded_files = dl_manager.download_and_extract({"info": info_url})
946
947 xml_urls = []
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls)
431 extracted_path(s): `str`, extracted paths of given URL(s).
432 """
--> 433 return self.extract(self.download(url_or_urls))
434
435 def get_recorded_sizes_checksums(self):
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download(self, url_or_urls)
308
309 start_time = datetime.now()
--> 310 downloaded_path_or_paths = map_nested(
311 download_func,
312 url_or_urls,
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)
427 num_proc = 1
428 if num_proc <= 1 or len(iterable) < parallel_min_length:
--> 429 mapped = [
430 _single_map_nested((function, obj, types, None, True, None))
431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
428 if num_proc <= 1 or len(iterable) < parallel_min_length:
429 mapped = [
--> 430 _single_map_nested((function, obj, types, None, True, None))
431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
432 ]
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
329 # Singleton first to spare some computation
330 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 331 return function(data_struct)
332
333 # Reduce logging to keep things readable in multiprocessing with tqdm
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config)
335 # append the relative path to the base_path
336 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 337 return cached_path(url_or_filename, download_config=download_config)
338
339 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):
/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
186 if is_remote_url(url_or_filename):
187 # URL, so get it from the cache (downloading if necessary)
--> 188 output_path = get_from_cache(
189 url_or_filename,
190 cache_dir=cache_dir,
/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
533 )
534 elif response is not None and response.status_code == 404:
--> 535 raise FileNotFoundError(f"Couldn't find file at {url}")
536 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
537 if head_error is not None:
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json
```
### Steps to reproduce the bug
`data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')`
### Expected behavior
download the data
### Environment info
python3.6
latest datasets/transformers version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5178/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5178/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 5:06:34
|
https://api.github.com/repos/huggingface/datasets/issues/5176
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5176/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5176/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5176/events
|
https://github.com/huggingface/datasets/issues/5176
| 1,430,214,539
|
I_kwDODunzps5VP1eL
| 5,176
|
prepare dataset for cloud storage doesn't work
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27285078?v=4",
"events_url": "https://api.github.com/users/araonblake/events{/privacy}",
"followers_url": "https://api.github.com/users/araonblake/followers",
"following_url": "https://api.github.com/users/araonblake/following{/other_user}",
"gists_url": "https://api.github.com/users/araonblake/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/araonblake",
"id": 27285078,
"login": "araonblake",
"node_id": "MDQ6VXNlcjI3Mjg1MDc4",
"organizations_url": "https://api.github.com/users/araonblake/orgs",
"received_events_url": "https://api.github.com/users/araonblake/received_events",
"repos_url": "https://api.github.com/users/araonblake/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/araonblake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/araonblake/subscriptions",
"type": "User",
"url": "https://api.github.com/users/araonblake",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"It looks like an issue with `gcsfs`, are you able to instantiate a `GCSFileSystem` manually ?",
"closing since it was probably due to gcsfs"
] | 2022-10-31T17:28:57
| 2023-03-28T09:11:46
| 2023-03-28T09:11:45
|
NONE
| null | null | null | null |
### Describe the bug
Following the [documentation](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) and [this PR](https://github.com/huggingface/datasets/pull/4724), I was downloading a Hugging Face dataset and storing it in cloud storage.
```
from datasets import load_dataset, load_dataset_builder
dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH')
dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```
The above code successfully downloaded the dataset; however, `download_and_prepare` returns the following error.
> Traceback (most recent call last):
> File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module>
> dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet")
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare
> fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths
> cls = get_filesystem_class(protocol)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class
> register_implementation(protocol, _import_class(bit["class"]))
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 257, in _import_class
> mod = importlib.import_module(mod)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/importlib/__init__.py", line 127, in import_module
> return _bootstrap._gcd_import(name[level:], package, level)
> File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
> File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
> File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
> File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
> File "<frozen importlib._bootstrap_external>", line 850, in exec_module
> File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
> File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module>
> dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet")
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare
> fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths
> cls = get_filesystem_class(protocol)
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class
> register_implementation(protocol, _import_class(bit["class"]))
> File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 258, in _import_class
> return getattr(mod, name)
> AttributeError: partially initialized module 'gcsfs' has no attribute 'GCSFileSystem' (most likely due to a circular import)
### Steps to reproduce the bug
1. pip install datasets==2.6.1 gcsfs==2022.8.2
2. Run the following code will reproduce the issue (change `LOCAL_PATH` and `Bucket_NAME` accordingly)
```
from datasets import load_dataset, load_dataset_builder
dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH')
dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```
### Expected behavior
The dataset should be downloaded and uploaded to cloud storage successfully.
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.5.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5176/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5176/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 147 days, 15:42:48
|
https://api.github.com/repos/huggingface/datasets/issues/5175
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5175/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5175/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5175/events
|
https://github.com/huggingface/datasets/issues/5175
| 1,428,696,231
|
I_kwDODunzps5VKCyn
| 5,175
|
Loading an external NER dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4",
"events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}",
"followers_url": "https://api.github.com/users/Taghreed7878/followers",
"following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}",
"gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Taghreed7878",
"id": 112555442,
"login": "Taghreed7878",
"node_id": "U_kgDOBrV1sg",
"organizations_url": "https://api.github.com/users/Taghreed7878/orgs",
"received_events_url": "https://api.github.com/users/Taghreed7878/received_events",
"repos_url": "https://api.github.com/users/Taghreed7878/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Taghreed7878",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2022-10-30T09:31:55
| 2022-11-01T13:15:49
| 2022-11-01T13:15:49
|
NONE
| null | null | null | null |
I need to use Hugging Face `datasets` to load a custom dataset similar to conll2003, but with more entities, where each file contains only two columns: word and NER tag.
I tried this code snippet that I found here as an answer to a similar issue:
```python
from datasets import Dataset

INPUT_COLUMNS = "ID Text NER".split()

def read_conll(file):
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line.startswith("-DOCSTART-") or line == "\n" or not line:
                if example[next(iter(example))]:
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
            else:
                row_cols = line.split()
                for i, col in enumerate(example):
                    example[col] = row_cols[i].rstrip()

train = Dataset.from_generator(read_conll, gen_kwargs={"file": "some_path"})
```
But the following error happened:
```
ValueError: Please pass `features` or at least one example when writing data
```
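For reference, a minimal sketch (not part of the original report) of a generator that `Dataset.from_generator` accepts: it must yield plain dicts, and each column should accumulate one value per line. The file path and column names below are only illustrative.
```python
from datasets import Dataset

COLUMNS = ["tokens", "ner_tags"]  # hypothetical names for the two columns

def read_conll(file):
    example = {col: [] for col in COLUMNS}
    with open(file, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("-DOCSTART-") or not line:
                # sentence boundary: emit the accumulated example, then reset
                if example["tokens"]:
                    yield example
                    example = {col: [] for col in COLUMNS}
            else:
                word, tag = line.split()[:2]
                example["tokens"].append(word)
                example["ner_tags"].append(tag)
    if example["tokens"]:
        # last sentence, in case the file has no trailing blank line
        yield example

train = Dataset.from_generator(read_conll, gen_kwargs={"file": "train.conll"})
```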
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4",
"events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}",
"followers_url": "https://api.github.com/users/Taghreed7878/followers",
"following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}",
"gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Taghreed7878",
"id": 112555442,
"login": "Taghreed7878",
"node_id": "U_kgDOBrV1sg",
"organizations_url": "https://api.github.com/users/Taghreed7878/orgs",
"received_events_url": "https://api.github.com/users/Taghreed7878/received_events",
"repos_url": "https://api.github.com/users/Taghreed7878/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Taghreed7878",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5175/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5175/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 3:43:54
|
https://api.github.com/repos/huggingface/datasets/issues/5172
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5172/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5172/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5172/events
|
https://github.com/huggingface/datasets/issues/5172
| 1,425,523,114
|
I_kwDODunzps5U98Gq
| 5,172
|
Inconsistency behavior between handling local file protocol and other FS protocols
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4",
"events_url": "https://api.github.com/users/leoleoasd/events{/privacy}",
"followers_url": "https://api.github.com/users/leoleoasd/followers",
"following_url": "https://api.github.com/users/leoleoasd/following{/other_user}",
"gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leoleoasd",
"id": 37735580,
"login": "leoleoasd",
"node_id": "MDQ6VXNlcjM3NzM1NTgw",
"organizations_url": "https://api.github.com/users/leoleoasd/orgs",
"received_events_url": "https://api.github.com/users/leoleoasd/received_events",
"repos_url": "https://api.github.com/users/leoleoasd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leoleoasd",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2022-10-27T12:03:20
| 2024-05-08T19:31:13
| null |
NONE
| null | null | null | null |
### Describe the bug
These lines are used during `load_from_disk`:
```
if is_remote_filesystem(fs):
dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path)
else:
fs = fsspec.filesystem("file")
dest_dataset_dict_path = dataset_dict_path
```
If a local FS is given, it uses the URL as the path name. If a remote FS is given, it uses the path component of the URL. This is inconsistent behavior when handling a file: when using a remote FS, you must write a URL, but for a local FS, even if you passed `LocalFileSystem` as `fs`, you still can't use a `file://` URL; it will be recognized as a directory named `file:`.
### Steps to reproduce the bug
```
import fsspec.core
url = "hdfs:///somewhere/MNIST"
# url = "file:///somewhere/MNIST"
fs, path = fsspec.core.url_to_fs(url)
fs.ls(path) # this will always work
load_from_disk(path, fs) # only works for local FS
load_from_disk(url, fs) # only works for remote FS
```
### Expected behavior
one of `url` or `path` should always work
I think extracting the path from the given URL with `fsspec.core.url_to_fs`, instead of using `is_remote_filesystem` and `extract_path_from_uri`, will fix this, since:
```
fsspec.core.url_to_fs("/somewhere/MNIST") -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("file:///somewhere/MNIST") -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("hdfs:///somewhere/MNIST") -> HDFS, '/somewhere/MNIST'
```
and
```
fsspec.core.url_to_fs("file:///somewhere/MNIST") == fsspec.core.url_to_fs("/somewhere/MNIST")
```
In theory, this wouldn't break anything, since passing a local path or a remote URI still works. It would only affect local URIs (making them work too).
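A quick illustrative check (not the library code) that `fsspec.core.url_to_fs` resolves a plain local path and a `file://` URL to the same filesystem and path:
```python
import fsspec.core

# On a POSIX machine both forms should resolve to LocalFileSystem + "/somewhere/MNIST"
fs1, path1 = fsspec.core.url_to_fs("/somewhere/MNIST")
fs2, path2 = fsspec.core.url_to_fs("file:///somewhere/MNIST")
print(type(fs1).__name__, path1)
print(type(fs2).__name__, path2)
assert path1 == path2 and type(fs1) is type(fs2)
```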
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.4.205.1**HIDDEN**
- Python version: 3.7.10
- PyArrow version: 8.0.0
- Pandas version: 1.2.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5172/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5172/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5170
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5170/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5170/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5170/events
|
https://github.com/huggingface/datasets/issues/5170
| 1,425,301,835
|
I_kwDODunzps5U9GFL
| 5,170
|
[Caching] Deterministic hashing of torch tensors
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[] | 2022-10-27T09:15:15
| 2022-11-02T17:18:43
| 2022-11-02T17:18:43
|
MEMBER
| null | null | null | null |
Currently this fails
```python
import torch
from datasets.fingerprint import Hasher
t = torch.tensor([1.])
def func(x):
return t + x
hash1 = Hasher.hash(func)
t = torch.tensor([1.])
hash2 = Hasher.hash(func)
assert hash1 == hash2
```
Also as noticed in https://discuss.huggingface.co/t/dataset-cant-cache-models-outputs/24945, using a model in a `map` function doesn't work well with caching. Indeed the `bert-base-uncased` model has a different hash every time you reload it. Supporting torch tensors may also help in this case.
This can be fixed by registering a custom pickling function for torch tensors, as we did for other objects such as CodeType, FunctionType and Regex in `py_utils.py`.
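For illustration, a minimal sketch (using the standard `copyreg`/`pickle` hook rather than the library's internal pickler, which works differently) of registering a reducer that serializes only a tensor's values, dtype and shape, so equal tensors produce identical bytes:
```python
import copyreg
import pickle

import numpy as np
import torch

def _rebuild_tensor(data, dtype, shape):
    # Hypothetical helper: rebuild the tensor from raw bytes + dtype + shape
    return torch.from_numpy(np.frombuffer(data, dtype=dtype).reshape(shape).copy())

def _reduce_tensor(t):
    # Serialize only the values, dtype and shape, so the pickled bytes do not
    # depend on storage identity (only exact torch.Tensor instances are covered)
    arr = t.detach().cpu().numpy()
    return _rebuild_tensor, (arr.tobytes(), arr.dtype.str, arr.shape)

copyreg.pickle(torch.Tensor, _reduce_tensor)

# Two equal tensors created at different times now pickle to the same bytes
assert pickle.dumps(torch.tensor([1.0])) == pickle.dumps(torch.tensor([1.0]))
```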
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5170/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5170/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 8:03:28
|
https://api.github.com/repos/huggingface/datasets/issues/5165
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5165/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5165/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5165/events
|
https://github.com/huggingface/datasets/issues/5165
| 1,423,616,677
|
I_kwDODunzps5U2qql
| 5,165
|
Memory explosion when trying to access 4d tensors in datasets cast to torch or np
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2022-10-26T08:14:47
| 2022-10-26T08:14:47
| null |
MEMBER
| null | null | null | null |
### Describe the bug
When trying to access an item by index in a `datasets.Dataset` cast to torch/np using `set_format` or `with_format`, we get a memory explosion if the item contains 4d (or above) tensors.
### Steps to reproduce the bug
MWE:
```python
from datasets import load_dataset
import numpy as np
def create_4d_tensor(item):
i = item["num_nodes"]
item["x_big"] = np.random.rand(i, 2*i, int(i/2), 1) + 1 # we create a big 4d tensor
return item
if __name__ == "__main__":
dataset = load_dataset(path=f"graphs-datasets/PROTEINS")
# This works
print(dataset["train"].format)
print(dataset["train"][0].keys())
dataset = dataset.map(
create_4d_tensor,
batched=False,
writer_batch_size=100,
)
# This works
print(dataset["train"].format)
print(dataset["train"][0].keys())
dataset.set_format("torch")
print(dataset["train"].format)
# This gets killed :(
print(dataset["train"][0].keys())
```
The problem likely comes from `format_table` [here](https://cs.github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/src/datasets/arrow_dataset.py#L2328)
### Expected behavior
No memory explosion when trying to access dataset items after cast.
### Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5165/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5165/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5162
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5162/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5162/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5162/events
|
https://github.com/huggingface/datasets/issues/5162
| 1,422,461,112
|
I_kwDODunzps5UyQi4
| 5,162
|
Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8604946?v=4",
"events_url": "https://api.github.com/users/Rijgersberg/events{/privacy}",
"followers_url": "https://api.github.com/users/Rijgersberg/followers",
"following_url": "https://api.github.com/users/Rijgersberg/following{/other_user}",
"gists_url": "https://api.github.com/users/Rijgersberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rijgersberg",
"id": 8604946,
"login": "Rijgersberg",
"node_id": "MDQ6VXNlcjg2MDQ5NDY=",
"organizations_url": "https://api.github.com/users/Rijgersberg/orgs",
"received_events_url": "https://api.github.com/users/Rijgersberg/received_events",
"repos_url": "https://api.github.com/users/Rijgersberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rijgersberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rijgersberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rijgersberg",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thanks for reporting, @Rijgersberg.\r\n\r\nWe were waiting for the release of `dill` 0.3.6, that happened 2 days ago (24 Oct 2022): https://github.com/uqfoundation/dill/releases/tag/dill-0.3.6\r\n- See comment: https://github.com/huggingface/datasets/pull/4397#discussion_r880629543\r\n\r\nAlso `multiprocess` 0.70.14 was released 2 days ago: https://github.com/uqfoundation/multiprocess/releases/tag/multiprocess-0.70.14\r\n\r\nWe are addressing this issue to align dependencies.",
"In your specific setup, I guess the compatible configuration is with `multiprocess` 0.70.13 (instead of 0.70.14).",
"@Rijgersberg this issue is fixed. It will be available in our next `datasets` release.",
"Thanks!",
"> @Rijgersberg this issue is fixed. It will be available in our next `datasets` release.\n\nAny chance you have a eta? ",
"@StefanSamba we are disussing about making a release early this week.",
"@Rijgersberg, please also that you can make `pip-compile` work by using the backtracking resolver (instead of the legacy one): https://pip-tools.readthedocs.io/en/latest/#a-note-on-resolvers\r\n```\r\npip-compile --resolver=backtracking requirements.in\r\n```\r\nThis resolver will automatically use `multiprocess` 0.70.13 version. "
] | 2022-10-25T13:23:50
| 2022-11-14T08:25:37
| 2022-10-28T05:38:15
|
NONE
| null | null | null | null |
### Describe the bug
When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict of `dill` appears.
It is caused by a transitive dependency conflict between `datasets` and `multiprocess`.
### Steps to reproduce the bug
```bash
$ echo "datasets" > requirements.in
$ pip install pip-tools
$ pip-compile requirements.in
Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6
Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1
There are incompatible versions in the resolved dependencies:
dill<0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
dill>=0.3.6 (from multiprocess==0.70.14->datasets==2.6.1->-r requirements.in (line 1))
```
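As a stopgap (based on the maintainers' note above that `multiprocess` 0.70.13 is the compatible release; this is not part of the original report), pinning it in `requirements.in` lets the resolver find a solution:
```
datasets
multiprocess==0.70.13  # compatible with datasets' dill<0.3.6 pin
```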
### Expected behavior
A correctly generated file `requirements.txt` with pinned dependencies
### Environment info
Tested with versions `2.6.1, 2.6.0, 2.5.2` on Python 3.8 and 3.10 on Ubuntu 20.04LTS and Python 3.10 on MacOS 12.6 (M1).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5162/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5162/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 16:14:25
|
https://api.github.com/repos/huggingface/datasets/issues/5161
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5161/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5161/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5161/events
|
https://github.com/huggingface/datasets/issues/5161
| 1,422,371,748
|
I_kwDODunzps5Ux6uk
| 5,161
|
Dataset canβt cache modelβs outputs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37979232?v=4",
"events_url": "https://api.github.com/users/jongjyh/events{/privacy}",
"followers_url": "https://api.github.com/users/jongjyh/followers",
"following_url": "https://api.github.com/users/jongjyh/following{/other_user}",
"gists_url": "https://api.github.com/users/jongjyh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jongjyh",
"id": 37979232,
"login": "jongjyh",
"node_id": "MDQ6VXNlcjM3OTc5MjMy",
"organizations_url": "https://api.github.com/users/jongjyh/orgs",
"received_events_url": "https://api.github.com/users/jongjyh/received_events",
"repos_url": "https://api.github.com/users/jongjyh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jongjyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jongjyh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jongjyh",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Addressed in https://github.com/huggingface/datasets/pull/5191 (torch.Tensor objects now produce deterministic hashes)"
] | 2022-10-25T12:19:00
| 2022-11-03T16:12:52
| 2022-11-03T16:12:51
|
NONE
| null | null | null | null |
### Describe the bug
Hi,
I am trying to cache some outputs of a teacher model (knowledge distillation) using the `map` function of the `datasets` library, but every time I run my code, all the sequences are recomputed. I tested a BERT model like this and got a different hash on every single run, so is there any way to deal with this?
### Steps to reproduce the bug
1. run below code
2. get different hash
```
from transformers import BertModel
from transformers import AutoTokenizer
import torch
token = ['hello']
model = BertModel.from_pretrained("bert-base-uncased").eval()
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
def abcd():
with torch.no_grad():
out = model(**tok(token,return_tensors='pt'))[0]
# out = tok(token)
return out
from datasets.fingerprint import Hasher
my_func = abcd
print(Hasher.hash(my_func))
print(abcd())
```
### Expected behavior
I want to cache all the model outputs.
### Environment info
datasets:2.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5161/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5161/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 9 days, 3:53:51
|
https://api.github.com/repos/huggingface/datasets/issues/5160
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5160/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5160/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5160/events
|
https://github.com/huggingface/datasets/issues/5160
| 1,422,193,938
|
I_kwDODunzps5UxPUS
| 5,160
|
Automatically add filename for image/audio folder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] |
[
"Also cc @anton-l ",
"BTW the exact same holds true for the audio folder",
"I'm fine with adding a new column with the file name personally. Not sure how breaking this is though",
"@patrickvonplaten do you mean just filename or full relative path inside the repo?\r\nI think it shouldn't be breaking, at least I cannot come up with any case where it is. Maybe @mariosasko can?\r\n\r\nalso I think that the problem here and in general is that Image/AudioFolder has default configuration which implies automatic label creation if there is not metadata file. It can be changed when you load the dataset with `load_dataset` but not on it's Hub page. \r\n\r\n",
"> also I think that the problem here and in general Image/AudioFolder has default configuration which implies automatic label creation if there is not metadata file\r\n\r\nYea I agree it's often the wrong default. We can also imagine adding the builder's parameters as YAML in the repo.",
"@lhoestq yes I also got the idea of some YAML config! not sure of what priority it is though.",
"but it would actually also solve this issue: https://github.com/huggingface/datasets/issues/5153",
"I meant just the file name (no path) that would already be super helpful IMO :-) (maybe dir+filename if there are dirs in the folder)",
"@patrickvonplaten one more time, to be sure I understand you.\r\nFor example, we have data structure like this:\r\n```\r\nββ data/\r\nβ ββ subdir/\r\nβ βββ cats/\r\nβ βββ 0.jpg\r\nβ βββ 1.jpg\r\nβ βββ 2.jpg\r\nβ βββ dogs/\r\nβ βββ 0.jpg\r\nβ βββ 1.jpg\r\nβ βββ 2.jpg\r\nβββ another_subdir/\r\n βββ 10.jpg\r\n βββ 11.jpg\r\n βββ 12.jpg\r\n```\r\nIs it okay to provide `\"data/subdir/cats/0.jpg\"`, `\"data/subdir/dogs/0.jpg\"`, `\"data/another_subdir/10.jpg\"`?\r\nI think providing just filenames might be confusing if they are not unique, as in this example. ",
"Yes I think the relative path as you proposed makes a lot of sense :-) "
] | 2022-10-25T09:56:49
| 2022-10-26T16:51:46
| null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
When creating a custom audio or image dataset, it would be great to automatically have access to the filename. It should be both:
a) Automatically displayed in the viewer
b) Automatically added as a column to the dataset when doing `load_dataset`
In `diffusers`, our tests now rely quite heavily on images and audio files, and it's a bit tedious at the moment to download specific images from a datasets repo.
E.g. we have a dataset of images for tests in `diffusers`: https://huggingface.co/datasets/hf-internal-testing/diffusers-images
where it would be extremely nice to have direct access to the filename, both visually on the datasets page (@severo ) and via the `load_dataset` function. We currently have some awkward functionality to download images by path name: https://github.com/huggingface/diffusers/blob/2fb8fafa4b761f6fc144cf75a6f6f0ea6af3a1c1/src/diffusers/utils/testing_utils.py#L131
It would be much nicer to just go through `load_dataset(...)`.
### Motivation
Intuitively, the filename is something people understand directly. E.g., if you upload a folder of images online, it's nice to see the image as well as the filename next to it right away and to be able to use it directly.
The label, on the other hand, is less intuitive to understand, as you haven't added it yourself.
### Your contribution
Not sure if I have the time to add it myself anytime soon, but it would help us a lot for `diffusers`.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5160/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5160/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5158
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5158/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5158/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5158/events
|
https://github.com/huggingface/datasets/issues/5158
| 1,422,059,287
|
I_kwDODunzps5UwucX
| 5,158
|
Fix language and license tag names in all Hub datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"There are currently 402 datasets with deprecated \"languages\" or \"licenses\".",
"hey @albertvillanova ,i would love to work on this issue if you like.",
"Hi @ayushthe1, thanks for your offer.\r\n\r\nBut as you can see, I self-assigned this issue.\r\n\r\nI have already fixed 200 out of the 402 datasets. My script is still running and fixing the rest.\r\n\r\nFor example: https://huggingface.co/datasets/fhamborg/news_sentiment_newsmtsc/discussions/2/files",
"Thanks for your time. Will try next time. π",
"@ayushthe1 feel free to take one of the non-assigned open issues: https://github.com/huggingface/datasets/issues",
"This is done."
] | 2022-10-25T08:19:29
| 2022-10-25T11:27:26
| 2022-10-25T10:42:19
|
MEMBER
| null | null | null | null |
While working on this:
- #5137
we realized there are still many datasets with deprecated "languages" and "licenses" tag names (instead of "language" and "license").
This is a blocking issue: no subsequent PR can be opened to modify their metadata, because a ValueError is thrown.
We should fix the "language" and "license" tag names in all Hub datasets.
TODO:
- [x] Fix language and license tag names in 402 Hub datasets
CC: @julien-c
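For illustration, a minimal sketch of the kind of rename each dataset card needs (the `fix_deprecated_tags` helper below is hypothetical and only rewrites the top-level YAML keys; it is not the actual script used for the 402 datasets):
```python
import re
from pathlib import Path

def fix_deprecated_tags(readme_path: str) -> None:
    """Rename the deprecated top-level YAML tags in a dataset card (sketch)."""
    card = Path(readme_path)
    text = card.read_text(encoding="utf-8")
    # "languages:" -> "language:" and "licenses:" -> "license:", only at line start
    text = re.sub(r"^languages:", "language:", text, flags=re.MULTILINE)
    text = re.sub(r"^licenses:", "license:", text, flags=re.MULTILINE)
    card.write_text(text, encoding="utf-8")
```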
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5158/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5158/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:22:50
|
https://api.github.com/repos/huggingface/datasets/issues/5157
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5157/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5157/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5157/events
|
https://github.com/huggingface/datasets/issues/5157
| 1,421,703,577
|
I_kwDODunzps5UvXmZ
| 5,157
|
Consistent caching between python and jupyter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gpucce",
"id": 32967787,
"login": "gpucce",
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"repos_url": "https://api.github.com/users/gpucce/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gpucce",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gpucce",
"id": 32967787,
"login": "gpucce",
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"repos_url": "https://api.github.com/users/gpucce/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gpucce",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gpucce",
"id": 32967787,
"login": "gpucce",
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"repos_url": "https://api.github.com/users/gpucce/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gpucce",
"user_view_type": "public"
}
] |
[
"Hi ! Maybe it's possible to have a consistent hash for a function defined in `__main__` and a function define in a notebook.\r\n\r\nHowever for functions imported from another location, pickle uses the location to identify the code, so in that case we can't do much I believe.\r\n\r\nWould it be ok for you if we only try to do this for functions in `__main__` / jupyter ?\r\n\r\nIf you'd like to contribute, you can read this part of the code and let me know if you have questions:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/py_utils.py#L617-L643\r\n\r\nI think the key here would be to also ignore the \"co_filename\" of functions defined in `__main__`",
"Seems like a good solution, I will start a PR and see if I understood the changes needed. Thanks!"
] | 2022-10-25T01:34:33
| 2022-11-02T15:43:22
| 2022-11-02T15:43:22
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
I hope this is not my mistake: currently, if I use `load_dataset` from a Python session on a custom dataset to do the preprocessing, the result is saved in the cache and loaded from the cache in other Python sessions; however, calling the same thing from a Jupyter notebook does not reuse it, meaning the preprocessing starts from scratch.
If adjusting the hashes is impossible, is there a way to manually set the dataset fingerprint to "force" this behaviour?
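One possible workaround (not a fix for the hashing itself, and assuming the preprocessing is done with `.map`) is to pin the fingerprint explicitly via the `new_fingerprint` argument, so the cache lookup no longer depends on how the function is hashed in each environment; a minimal sketch:
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # "imdb" is just a placeholder dataset

def preprocess(example):
    example["text"] = example["text"].lower()
    return example

# With a pinned fingerprint, the same cache file is looked up whether this runs
# from a plain Python script or from a Jupyter notebook.
ds = ds.map(preprocess, new_fingerprint="lowercase-text-v1")
```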
### Motivation
If this is not already the case and I am doing something wrong, it would be useful to have the two fingerprints consistent so one can create the dataset once and then try small things on jupyter without preprocessing everything again.
### Your contribution
I am happy to try a PR if you give me some pointers on where the changes should happen.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5157/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5157/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 8 days, 14:08:49
|
https://api.github.com/repos/huggingface/datasets/issues/5156
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5156/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5156/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5156/events
|
https://github.com/huggingface/datasets/issues/5156
| 1,421,667,125
|
I_kwDODunzps5UvOs1
| 5,156
|
Unable to download dataset using Azure Data Lake Gen 2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/87379512?v=4",
"events_url": "https://api.github.com/users/clarissesimoes/events{/privacy}",
"followers_url": "https://api.github.com/users/clarissesimoes/followers",
"following_url": "https://api.github.com/users/clarissesimoes/following{/other_user}",
"gists_url": "https://api.github.com/users/clarissesimoes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clarissesimoes",
"id": 87379512,
"login": "clarissesimoes",
"node_id": "MDQ6VXNlcjg3Mzc5NTEy",
"organizations_url": "https://api.github.com/users/clarissesimoes/orgs",
"received_events_url": "https://api.github.com/users/clarissesimoes/received_events",
"repos_url": "https://api.github.com/users/clarissesimoes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clarissesimoes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clarissesimoes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clarissesimoes",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! From the `adlfs` docs, there are two filesystems you can use:\r\n> To use the Gen1 filesystem:\r\n> - known_implementations[βadlβ] = {βclassβ: βadlfs.AzureDatalakeFileSystemβ}\r\n> \r\n> To use the Gen2 filesystem:\r\n> - known_implementations[βabfsβ] = {βclassβ: βadlfs.AzureBlobFileSystemβ}\r\n\r\nIf I'm not mistaken you're using the second one - so you should use `abfs://` instead of `adl://`, and also run this at the beginning of your script:\r\n```python\r\nfrom fsspec.registry import known_implementations\r\nknown_implementations['abfs'] = {'class': 'adlfs.AzureDatalakeFileSystem'}\r\n```\r\n\r\n",
"Thank you @lhoestq . Great call.\r\nUsing the default class from `known_implementations` dict solved my problem\r\n```\r\nknown_implementations[βabfsβ] = {βclassβ: βadlfs.AzureBlobFileSystemβ}\r\n```\r\nI'm closing this issue.",
"> Thank you @lhoestq . Great call. Using the default class from `known_implementations` dict solved my problem\r\n> \r\n> ```\r\n> known_implementations[βabfsβ] = {βclassβ: βadlfs.AzureBlobFileSystemβ}\r\n> ```\r\n> \r\n> I'm closing this issue.\r\n\r\nHi so here `Saving serialized datasets\r\n\r\nAfter you have processed your dataset, you can save it to your cloud storage with [Dataset.save_to_disk()](https://huggingface.co/docs/datasets/v2.17.0/en/package_reference/main_classes#datasets.Dataset.save_to_disk):` what is the encoded dataset I have failed to save it ",
"Uploading failed ? Did you get an error message ?"
] | 2022-10-25T00:43:18
| 2024-02-15T09:48:36
| 2022-11-17T23:37:08
|
NONE
| null | null | null | null |
### Describe the bug
When using the `DatasetBuilder.download_and_prepare` method with credentials for the Azure Data Lake (adl) Gen2 cloud storage, the following error is shown:
```
Traceback (most recent call last):
File "download_hf_dataset.py", line 143, in <module>
main()
File "download_hf_dataset.py", line 102, in main
builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet")
File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/datasets/builder.py", line 671, in download_and_prepare
fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/core.py", line 639, in get_fs_token_paths
fs = cls(**options)
File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/spec.py", line 76, in __call__
obj = super().__call__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'account_name'
```
If I don't pass the storage_options argument (leave it as None), it requires the credentials used in ADL Gen 1:
`TypeError: __init__() missing 3 required positional arguments: 'tenant_id', 'client_id', and 'client_secret'`
Thus, it is not possible to download a dataset from the cloud using Azure Data Lake (adl) Gen2.
### Steps to reproduce the bug
Assuming that you have an Azure account and a Storage Account that can be used to reproduce:
1. Create a dict with the format to connect to Azure Data Lake Gen 2
```python
storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY}  # gen 2 filesystem
```
2. Create a dataset builder for any HF hosted dataset
```python
builder = load_dataset_builder(dataset_name)
```
3. Try to download the dataset passing the storage_options as an argument
```python
save_dir = 'adl://my_save_dir'
builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet")
```
### Expected behavior
Not seeing the error mentioned above and being able to download the dataset to the provided path on ADL
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
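For reference, the resolution discussed in the comments is to target the Gen2 filesystem explicitly via the `abfs` protocol; a minimal sketch (the container name and credentials are placeholders):
```python
from fsspec.registry import known_implementations
from datasets import load_dataset_builder

# ADLS Gen2 is served by adlfs.AzureBlobFileSystem under the "abfs" protocol.
known_implementations["abfs"] = {"class": "adlfs.AzureBlobFileSystem"}

storage_options = {"account_name": "MY_ACCOUNT", "account_key": "MY_KEY"}

builder = load_dataset_builder("rotten_tomatoes")  # any Hub dataset
builder.download_and_prepare(
    "abfs://my-container/rotten_tomatoes",  # note abfs://, not adl://
    storage_options=storage_options,
    max_shard_size="250MB",
    file_format="parquet",
)
```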
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/87379512?v=4",
"events_url": "https://api.github.com/users/clarissesimoes/events{/privacy}",
"followers_url": "https://api.github.com/users/clarissesimoes/followers",
"following_url": "https://api.github.com/users/clarissesimoes/following{/other_user}",
"gists_url": "https://api.github.com/users/clarissesimoes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clarissesimoes",
"id": 87379512,
"login": "clarissesimoes",
"node_id": "MDQ6VXNlcjg3Mzc5NTEy",
"organizations_url": "https://api.github.com/users/clarissesimoes/orgs",
"received_events_url": "https://api.github.com/users/clarissesimoes/received_events",
"repos_url": "https://api.github.com/users/clarissesimoes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clarissesimoes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clarissesimoes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clarissesimoes",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5156/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5156/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 23 days, 22:53:50
|
https://api.github.com/repos/huggingface/datasets/issues/5153
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5153/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5153/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5153/events
|
https://github.com/huggingface/datasets/issues/5153
| 1,420,833,457
|
I_kwDODunzps5UsDKx
| 5,153
|
default Image/AudioFolder infers labels when there is no metadata files even if there is only one dir
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
] |
[
"Makes sense! For the last structure, we could count the path segments (delimited by \"/\" for URLs and `os.sep` for local paths) to ensure all inferred labels are on the same level. Otherwise, I think it's safe to assume they are meaningless and ignore them.\r\n"
] | 2022-10-24T13:28:18
| 2022-11-15T16:31:10
| 2022-11-15T16:31:09
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
By default, FolderBasedBuilder infers labels if there are no metadata files, even when this is meaningless (for example, when all files are in a single directory or in the root folder; see this repo as an example: https://huggingface.co/datasets/patrickvonplaten/audios).
This corner case comes up when quickly exploring images or audio on the Hub.
### Steps to reproduce the bug
If you have directory like this:
```
repo
image1.jpg
image2.jpg
image3.jpg
```
or
```
repo
data
image1.jpg
image2.jpg
image3.jpg
```
doing `ds = load_dataset(repo)` would create a `label` feature:
```python
print(ds["train"][0])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 0}
```
Also, if you have the following structure:
```
repo
data
image1.jpg
image2.jpg
image3.jpg
image4.jpg
image5.jpg
image6.jpg
```
it will infer two labels:
```python
print(ds["train"][0])
print(ds["train"][-1])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 1}
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x415 at 0x7FB5326555B0>, 'label': 0}
```
### Expected behavior
We should have only one base feature (Image/Audio) in such cases.
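A minimal sketch of the heuristic suggested in the comments (a hypothetical helper, not the actual `FolderBasedBuilder` code): only infer labels when there is more than one candidate label and all files sit at the same directory depth:
```python
import os

def should_infer_labels(file_paths):
    """Return True only when directory names look like meaningful class labels (sketch)."""
    labels = {os.path.basename(os.path.dirname(path)) for path in file_paths}
    depths = {path.count(os.sep) for path in file_paths}
    # A single parent directory, or files at mixed depths, means the
    # directory names should not be turned into a `label` feature.
    return len(labels) > 1 and len(depths) == 1

print(should_infer_labels(["data/image1.jpg", "data/image2.jpg"]))  # False
print(should_infer_labels(["cats/1.jpg", "dogs/2.jpg"]))            # True
```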
### Environment info
all versions of `datasets`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5153/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5153/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 22 days, 3:02:51
|
https://api.github.com/repos/huggingface/datasets/issues/5152
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5152/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5152/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5152/events
|
https://github.com/huggingface/datasets/issues/5152
| 1,420,808,919
|
I_kwDODunzps5Ur9LX
| 5,152
|
refactor FolderBasedBuilder and Image/AudioFolder tests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] |
open
| false
| null |
[] |
[] | 2022-10-24T13:11:52
| 2022-10-24T13:11:52
| null |
CONTRIBUTOR
| null | null | null | null |
Tests for FolderBasedBuilder, ImageFolder and AudioFolder mostly duplicate each other. They need to be refactored so that AudioFolder and ImageFolder only contain tests specific to each loader.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5152/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5152/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5151
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5151/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5151/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5151/events
|
https://github.com/huggingface/datasets/issues/5151
| 1,420,791,163
|
I_kwDODunzps5Ur417
| 5,151
|
Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
] |
[
"also asked in https://discuss.huggingface.co/t/create-multiple-dataset-configs-with-push-to-hub-method/25480"
] | 2022-10-24T12:59:18
| 2022-11-04T14:55:20
| null |
CONTRIBUTOR
| null | null | null | null |
Currently, one can only push different splits within the single default config of a dataset.
Would be nice to allow something like:
```
ds.push_to_hub(repo_name, config=config_name)
```
I'm not sure, but this will probably require changes to the `data_files.py` patterns. If so, it would also allow creating different configs for packaged-module datasets.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5151/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5151/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5150
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5150/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5150/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5150/events
|
https://github.com/huggingface/datasets/issues/5150
| 1,420,684,999
|
I_kwDODunzps5Ure7H
| 5,150
|
Problems after upgrading to 2.6.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pietrolesci",
"id": 61748653,
"login": "pietrolesci",
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pietrolesci",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi! I can't reproduce the error following these steps. Can you please provide a reproducible example?",
"I faced the same issue:\r\n\r\n### Repro\r\n```\r\n!pip install datasets==2.6.1\r\nimport datasets as Dataset\r\ndataset = Dataset.from_pandas(dataframe)\r\ndataset.save_to_disk(local)\r\n\r\n!pip install datasets==2.5.2\r\nimport datasets as Dataset\r\ndataset = Dataset.load_from_disk(local)\r\n```\r\n\r\n",
"@Lokiiiiii And what are the contents of the \"dataframe\" in your example?",
"I bumped into the issue too. @Lokiiiiii thanks for steps. I \"solved\" if for now by `pip install datasets>=2.6.1` everywhere.",
"Hi all, \r\nI experienced the same issue. \r\nPlease note that the pull request is related to the IMDB example provided in the doc, and is a fix for that, in that context, to make sure that people can follow the doc example and have a working system. \r\nIt does not provide a fix for Datasets itself. ",
"im getting the same error.\r\n- using the base AWS HF container that uses a datasets <2.\r\n- updating the AWS HF container to use dataset 2.4\r\n",
"Same here, running on our SageMaker pipelines. It's only happening for some but not all of our saved Datasets.",
"I am also receiving this error on Sagemaker but not locally, I have noticed that this occurs when the `.dataset/` folder does not contain a single file like:\r\n\r\n`dataset.arrow`\r\n\r\nbut instead contains multiple files like:\r\n\r\n`data-00000-of-00002.arrow`\r\n`data-00001-of-00002.arrow`\r\n\r\nI think that it may have something to do with this recent PR that updated the behaviour of `dataset.save_to_disk` by introducing sharding: https://github.com/huggingface/datasets/pull/5268\r\n\r\nFor now I can get around this by forcing datasets==2.8.0 on machine that creates dataset and in the huggingface instance for training (by running this at the start of training script `os.system(\"pip install datasets==2.8.0\")`)\r\n\r\nTo ensure the dataset is a single shard when saving the dataset locally:\r\n\r\n```python3\r\ndataset.flatten_indices().save_to_disk('path/to/dataset', num_shards=1)\r\n```\r\n\r\n and then manually changing the name afterwards from `path/to/dataset/data-00000-of-00001.arrow` to `path/to/dataset/dataset.arrow` and updating the `path/to/dataset/state.json` to reflect this name change. i.e. by changing `state.json` to this:\r\n\r\n```javascript\r\n{\r\n \"_data_files\": [\r\n {\r\n \"filename\": \"dataset.arrow\"\r\n }\r\n ],\r\n \"_fingerprint\": \"420086f0636f8727\",\r\n \"_format_columns\": null,\r\n \"_format_kwargs\": {},\r\n \"_format_type\": null,\r\n \"_output_all_columns\": false,\r\n \"_split\": null\r\n}\r\n```",
"Does anyone know if this has been resolved?",
"I have the same issue in datasets version 2.3.2"
] | 2022-10-24T11:32:36
| 2024-05-12T07:40:03
| null |
NONE
| null | null | null | null |
### Describe the bug
Loading a dataset_dict from disk with `load_from_disk` now raises a `KeyError: "length"` that did not occur in v2.5.2.
Context:
- Each individual dataset in the dict is created with `Dataset.from_pandas`
- The dataset_dict is created from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- The pandas dataframe, besides text columns, has a column containing a dictionary with potentially different keys in each row. `Dataset.from_pandas` correctly adds `key: None` to all dictionaries in each row so that the schema can be inferred.
### Steps to reproduce the bug
Steps to reproduce:
- Upgrade to datasets==2.6.1
- Create a dataset from a pandas dataframe with `Dataset.from_pandas`
- Create a dataset_dict from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- Save to disk with the `save_to_disk` function (see the sketch below)
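A self-contained sketch of these steps (column names and values are made up for illustration):
```python
import pandas as pd
from datasets import Dataset, DatasetDict, load_from_disk

# A text column plus a dict column whose keys differ between rows.
df = pd.DataFrame({
    "text": ["first example", "second example"],
    "meta": [{"a": 1}, {"b": 2}],
})

train_ds = Dataset.from_pandas(df)
val_ds = Dataset.from_pandas(df)

dataset_dict = DatasetDict({"train": train_ds, "validation": val_ds})
dataset_dict.save_to_disk("my_dataset_dict")

reloaded = load_from_disk("my_dataset_dict")  # raised KeyError: 'length' on 2.6.1 per this report
```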
### Expected behavior
Same as in v2.5.2, that is, loading from disk without errors.
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5150/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5150/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5148
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5148/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5148/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5148/events
|
https://github.com/huggingface/datasets/issues/5148
| 1,420,219,222
|
I_kwDODunzps5UptNW
| 5,148
|
Cannot find the rvl_cdip dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20509836?v=4",
"events_url": "https://api.github.com/users/santule/events{/privacy}",
"followers_url": "https://api.github.com/users/santule/followers",
"following_url": "https://api.github.com/users/santule/following{/other_user}",
"gists_url": "https://api.github.com/users/santule/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/santule",
"id": 20509836,
"login": "santule",
"node_id": "MDQ6VXNlcjIwNTA5ODM2",
"organizations_url": "https://api.github.com/users/santule/orgs",
"received_events_url": "https://api.github.com/users/santule/received_events",
"repos_url": "https://api.github.com/users/santule/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/santule/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santule/subscriptions",
"type": "User",
"url": "https://api.github.com/users/santule",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Hi, @santule.\r\n\r\nWe have transferred all dataset scripts from GitHub to the Hugging Face Hub: https://huggingface.co/datasets\r\n- Concretely, you have \"rvl_cdip\" here: https://huggingface.co/datasets/rvl_cdip\r\n\r\nTo be able to load them, you should update your `datasets` library:\r\n```\r\npip install -U datasets\r\n```",
"thank you, it worked"
] | 2022-10-24T04:57:42
| 2022-10-24T12:23:47
| 2022-10-24T06:25:28
|
NONE
| null | null | null | null |
Hi,
I am trying to use `load_dataset` to load the official "rvl_cdip" dataset but am getting an error.
`dataset = load_dataset("rvl_cdip")`
`Couldn't find 'rvl_cdip' on the Hugging Face Hub either: FileNotFoundError: Couldn't find the file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/rvl_cdip/rvl_cdip.py`
Regards,
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5148/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5148/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1:27:46
|
https://api.github.com/repos/huggingface/datasets/issues/5147
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5147/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5147/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5147/events
|
https://github.com/huggingface/datasets/issues/5147
| 1,419,522,275
|
I_kwDODunzps5UnDDj
| 5,147
|
Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8387736?v=4",
"events_url": "https://api.github.com/users/falcaopetri/events{/privacy}",
"followers_url": "https://api.github.com/users/falcaopetri/followers",
"following_url": "https://api.github.com/users/falcaopetri/following{/other_user}",
"gists_url": "https://api.github.com/users/falcaopetri/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/falcaopetri",
"id": 8387736,
"login": "falcaopetri",
"node_id": "MDQ6VXNlcjgzODc3MzY=",
"organizations_url": "https://api.github.com/users/falcaopetri/orgs",
"received_events_url": "https://api.github.com/users/falcaopetri/received_events",
"repos_url": "https://api.github.com/users/falcaopetri/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/falcaopetri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/falcaopetri/subscriptions",
"type": "User",
"url": "https://api.github.com/users/falcaopetri",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"Hi ! In the `transformers` issue the object to not hash is a `Pool` - I think you can instantiate it inside your function instead of passing it as a parameter. It's good practice that your function and all its fn_kwargs are picklable, in case you want to parallelize `map` using `num_proc>1`\r\n\r\nFor the other case `def fn(example, verbose=False):` however, I agree it would be nice to let the user specify that \"verbose\" needs to be ignored.\r\n\r\nDo you think providing a decorator could help ? Maybe\r\n```python\r\n@datasets.hashing.register(ignore_kwargs=[\"verbose\"])\r\ndef func(example, verbose=False):\r\n ...\r\n```",
"Hi @lhoestq! Thanks for your response.\r\n\r\nA `Pool` shouldn't be instantiated within the function, because there's a huge overhead in doing so. The main idea is that the same `Pool` should be used across all function calls. Parallel `map` is not helpful/desired in that specific scenario, because the heavy parallel computation is done by another lib (`pyctcdecode`, called within `transformer`'s model inference code).\r\n\r\nBut yes, it makes sense to be able to leverage parallel processing by just doing `num_proc>1` when possible.\r\n\r\nYour decorator suggestions seems like a pretty clean API to me. I didn't find a `datasets.hashing` module though. Would it be created for this specific purpose? Any downsides in just using `datasets.fingerprint`?\r\n\r\nAnd would `datasets.hashing.register` just add some metadata to `func` in your approach (so it could be inspected from `fingerprint_transform`)?\r\n\r\nAnd looking to the `datasets.Dataset` API, `.filter` would also benefited from this.",
"> Would it be created for this specific purpose? Any downsides in just using datasets.fingerprint?\r\n\r\nThis can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\n> And would datasets.hashing.register just add some metadata to func in your approach (so it could be inspected from fingerprint_transform)?\r\n\r\nYup that's the idea :)\r\n\r\n> And looking to the datasets.Dataset API, .filter would also benefited from this.\r\n\r\nIndeed !\r\n\r\n-----\r\n\r\nIf you would like to contribute this you can assign yourself to this issue by posting #self-assign\r\nAnd of course if you have questions or if I can help, feel free to ping me !",
"> This can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\nSure, it makes sense.\r\n\r\n---\r\n\r\nI don't plan to work on it right now, so I'll let it unassigned in case somebody wants to join. I'll get back at it as soon as possible though.\r\n"
] | 2022-10-22T21:46:38
| 2022-11-01T22:19:07
| null |
NONE
| null | null | null | null |
### Feature request
`dataset.map` accepts a `fn_kwargs` that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint.
I'd like to be able to inform `fingerprint_transform` which `fn_kwargs` should/shouldn't be taken into account during hashing.
Of course, users should be aware of how to use this new feature properly, just like the internal usages of `fingerprint_transform` [do](https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/src/datasets/arrow_dataset.py#L2700).
### Motivation
This is originally motivated by https://github.com/huggingface/transformers/pull/18351#issuecomment-1263588680.
Nonetheless, consider a more general processing function that accepts a kwarg that does not influence its output:
```python
def fn(example, verbose=False):
...
```
Then `dataset.map(fn, fn_kwargs={"verbose": True})` would not benefit from dataset caching.
I'm not sure if other methods in the `Dataset` API could benefit from this feature.
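To make the cache miss concrete, a toy sketch (the dataset and function are made up):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

def add_one(example, verbose=False):
    if verbose:
        print(example)
    return {"x": example["x"] + 1}

# Both calls produce identical data, but the differing fn_kwargs are hashed into
# the new fingerprint, so a cached result from the first call could not be reused.
out_quiet = ds.map(add_one, fn_kwargs={"verbose": False})
out_loud = ds.map(add_one, fn_kwargs={"verbose": True})
# _fingerprint is internal; used here only to show the difference.
print(out_quiet._fingerprint == out_loud._fingerprint)  # False
```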
### Your contribution
Based on `fingerprint_transform`'s `wrapper` function [here](https://github.com/huggingface/datasets/blob/c59cc34fcd2a369d27b77cc678017f5976a926a9/src/datasets/fingerprint.py#L443), it seems to me that it should be possible to make `.map`/`._map_single` accept something like `fn_use_fingerprint_kwargs`/`fn_ignore_fingerprint_kwargs` (probably a better arg name exists). This would then be used by `fingerprint_transform.wrapper` to hash the transformation more flexibly.
I could contribute with a PR if this feature and approach look good to you.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5147/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5147/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5145
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5145/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5145/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5145/events
|
https://github.com/huggingface/datasets/issues/5145
| 1,418,005,452
|
I_kwDODunzps5UhQvM
| 5,145
|
Dataset order is not deterministic with ZIP archives and `iter_files`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fxmarty",
"id": 9808326,
"login": "fxmarty",
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fxmarty",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thanks for reporting ! The issue doesn't come from shuffling, but from `beans` row order not being deterministic:\r\n\r\nhttps://huggingface.co/datasets/beans/blob/main/beans.py uses `dl_manager.iter_files` on ZIP archives and the file order doesn't seen to be deterministic and changes across machines",
"Thank you for noticing indeed!",
"This is still a bug, so I'd keep this one open if you don't mind ;)",
"Besides the linked PR, to make the loading process fully deterministic, I believe we should also sort the data files [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L276) and [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L485) (e.g. fsspec's `LocalFileSystem.glob` relies on `os.scandir`, which yields the contents in arbitrary order). My concern is the overhead of these sorts... Maybe we could introduce a new flag to `load_dataset` similar to TFDS' [`shuffle_files`](https://www.tensorflow.org/datasets/determinism#determinism_when_reading) or sort only if the number of data files is small?",
"We already return the result sorted at the end of `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository` if I'm not mistaken",
"@lhoestq Oh, you are right. Feel free to ignore my comment.",
"I think the corresponding PR is ready to be merged :hugs: ",
"@albertvillanova Thanks for the fix!"
] | 2022-10-21T09:00:03
| 2022-10-27T09:51:49
| 2022-10-27T09:51:10
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
For the `beans` dataset (I did not try others), the order of samples is not the same on different machines. Tested on my local laptop, a GitHub Actions machine, and an EC2 instance: the three yield different orders.
### Steps to reproduce the bug
In a clean docker container or conda environment with datasets==2.6.1, run
```python
from datasets import load_dataset
from pprint import pprint
data = load_dataset("beans", split="validation")
pprint(data["image_file_path"])
```
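The comments trace this back to `dl_manager.iter_files` yielding ZIP archive members in a platform-dependent order; sorting the listing before yielding is the kind of fix that restores determinism. A toy sketch with made-up file names:
```python
# The enumeration order of archive/directory entries can differ between machines...
listed = ["bean_rust_val_10.jpg", "healthy_val_2.jpg", "angular_leaf_spot_val_1.jpg"]

# ...but iterating over a sorted copy yields the same order everywhere.
for path in sorted(listed):
    print(path)
```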
### Expected behavior
The order of the images is the same on all machines.
### Environment info
On the EC2 instance:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.14.291-218.527.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
- Numpy version: not checked
```
On my local laptop:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Numpy version: 1.23.1
```
On github actions:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-azure-x86_64-with-glibc2.2.5
- Python version: 3.8.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
- Numpy version: 1.23.4
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5145/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5145/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 0:51:07
|
https://api.github.com/repos/huggingface/datasets/issues/5144
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5144/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5144/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5144/events
|
https://github.com/huggingface/datasets/issues/5144
| 1,417,974,731
|
I_kwDODunzps5UhJPL
| 5,144
|
Inconsistent documentation on map remove_columns
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4",
"events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}",
"followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers",
"following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhaowei-wang-nlp",
"id": 22047467,
"login": "zhaowei-wang-nlp",
"node_id": "MDQ6VXNlcjIyMDQ3NDY3",
"organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs",
"received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events",
"repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhaowei-wang-nlp",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting, @zhaowei-wang-nlp.\r\n\r\nYou are right, the documentation is confusing on the behavior of `remove_columns`. We should better explain it. ",
"This is a duplicate of https://github.com/huggingface/datasets/issues/2343.",
"I'm closing this issue because as @mariosasko pointed out, it is a duplicate of:\r\n- #2343"
] | 2022-10-21T08:37:53
| 2022-11-15T14:15:10
| 2022-11-15T14:15:10
|
NONE
| null | null | null | null |
### Describe the bug
The page [process](https://huggingface.co/docs/datasets/process) says this about the parameter `remove_columns` of the function `map`:
When you remove a column, it is only removed after the example has been provided to the mapped function.
So it seems that the `remove_columns` parameter removes after the mapped functions.
However, another page, [the documentation of the function map](https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.Dataset.map.remove_columns) says:
Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in remove_columns, these columns will be kept.
So one page says "after the mapped function" and another says "before the mapped function."
Is there something wrong?
### Steps to reproduce the bug
Not about code.
### Expected behavior
Consistent descriptions of the behavior of the `remove_columns` parameter of the `map` function across the two documentation pages.
### Environment info
datasets V2.6.0
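For what it's worth, a minimal sketch of the behavior as I understand it: the column is still passed to the mapped function, removed from the original example, and then the function's output is merged back, so a column re-added by the function is kept (the names and values below are just illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

def upper_and_len(example):
    # "text" is still visible inside the mapped function...
    return {"length": len(example["text"]), "text": example["text"].upper()}

# ...but the original "text" column is dropped before the output is merged,
# so the "text" returned by the function is the one that is kept.
out = ds.map(upper_and_len, remove_columns=["text"])
print(out.column_names)  # contains 'label', 'length' and the re-added 'text'
print(out[0]["text"])    # 'A'
```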
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5144/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5144/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 25 days, 5:37:17
|
https://api.github.com/repos/huggingface/datasets/issues/5143
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5143/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5143/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5143/events
|
https://github.com/huggingface/datasets/issues/5143
| 1,416,837,186
|
I_kwDODunzps5UczhC
| 5,143
|
DownloadManager Git LFS support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Muennighoff",
"id": 62820084,
"login": "Muennighoff",
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Muennighoff",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hey ! Actually it works, just pass the right URL ;)\r\nThe URL must be the one with β/resolve/β\r\n\r\ne.g. https://huggingface.co/datasets/imagenet-1k/resolve/main/data/test_images.tar.gz\r\n\r\nYou can even pass a relative path to the dl_manager instead, like `dl_manager.download(\"data/test_images.tar.gz\")`",
"Amazing it works, thanks!"
] | 2022-10-20T15:29:29
| 2022-10-20T17:17:10
| 2022-10-20T17:17:10
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
Maybe I'm mistaken, but the `DownloadManager` does not support extracting Git LFS files out of the box, right?
Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns LFS files, as far as I can tell.
Is there a good way to write a dataset loading script for a repo with lfs files?
### Motivation
/
### Your contribution
/
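Based on the answer in the comments above (pass a `/resolve/` URL or a relative path to the download manager), here is a rough sketch of what such a loading script could look like. The class name, features, and file layout are placeholders; only the `dl_manager` and builder APIs are existing ones:
```python
import os

import datasets


class MyLfsDataset(datasets.GeneratorBasedBuilder):
    """Hypothetical builder for a repo that stores its data as Git LFS archives."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # A relative path (or a ".../resolve/main/..." URL) makes the Hub serve
        # the actual LFS content instead of the pointer file.
        data_dir = dl_manager.download_and_extract("data/train.tar.gz")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir}
            )
        ]

    def _generate_examples(self, data_dir):
        for idx, fname in enumerate(sorted(os.listdir(data_dir))):
            with open(os.path.join(data_dir, fname), encoding="utf-8") as f:
                yield idx, {"text": f.read()}
```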
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Muennighoff",
"id": 62820084,
"login": "Muennighoff",
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Muennighoff",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5143/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5143/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1:47:41
|
https://api.github.com/repos/huggingface/datasets/issues/5137
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5137/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5137/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5137/events
|
https://github.com/huggingface/datasets/issues/5137
| 1,414,642,723
|
I_kwDODunzps5UUbwj
| 5,137
|
Align task tags in dataset metadata
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"I removed all the invalid task_ids in datasts without namespace, based on the <s>(internal)</s> types.ts",
"(Types.ts is not internal it's public)",
"I have opened PRs to fix the task_ids in all datasets within a namespace as well.\r\n\r\nWorking on task_categories...",
"For future reference: this fix had some complications\r\n\r\nWhen trying to open a PR to fix the task tags, an exception was thrown if:\r\n- the metadata contained \"languages\" or \"licenses\" (instead of \"language\" or \"license\")\r\n- the metadata contained a non-valid language: `en-US` (instead of `en`), `no` (instead of `'no'`),...\r\n- the metadata contained a non-valid license\r\n- either `task_categories` or `task_ids` was not an array (a dict for each config)\r\n- the metadata contained non-valid tag names\r\n\r\nErrors:\r\n```\r\nValueError: - Error: \"languages\" is deprecated. Use \"language\" instead.\r\n```\r\n```\r\nValueError: - Error: \"licenses\" is deprecated. Use \"license\" instead.\r\n```\r\n```\r\nValueError: - Error: \"language[17]\" must only contain lowercase characters\r\n```\r\n```\r\nValueError: - Error: \"language[0]\" with value \"cz, de, it\" is not valid. It must be an ISO 639-1, 639-2 or 639-3 code (two/three letters), or a special value like \"code\", \"multilingual\". If you want to use BCP-47 identifiers, you can specify them in language_bcp47.\r\n```\r\n```\r\nValueError: - Error: \"task_ids\" must be an array\r\n```",
"All Hub datasets are done.",
"great job! did you have feedback from Hub users/i.E. repo authors?",
"Yes, @julien-c. These are some of the feedbacks:\r\n- Most people just thank for the fix: [cahya/librivox-indonesia](https://huggingface.co/datasets/cahya/librivox-indonesia/discussions/1#6357cd8a292a050ebd705f84), [TurkuNLP/xlsum-fi](https://huggingface.co/datasets/TurkuNLP/xlsum-fi/discussions/1#6357828aa1f8ad1c31bcbe46), [coastalcph/fairlex](https://huggingface.co/datasets/coastalcph/fairlex/discussions/4#6351a527a8e595171ab1aef2)\r\n- Why are we changing their task names? [joelito/lextreme](https://huggingface.co/datasets/joelito/lextreme/discussions/1#6351b576fe367c0d9b12041b)\r\n - I take note of this for the next bulk operation; besides the PR title, we should also add a description to explain the reason for the change and also maybe putting a link to some pertinent GH Issue page\r\n- Some of them ask where to find the list of the supported task values is: [dennlinger/klexikon](https://huggingface.co/datasets/dennlinger/klexikon/discussions/3#6356b3ea80f8cb3ab777ac5c), [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad/discussions/1#635262467e4cc3135fd09f58)\r\n - Currently, the list is here: https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L85\r\n - Maybe we could made them more easily accessible\r\n- Some people do not agree about current \"hierarchy\":\r\n - text-scoring: [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/1#6357c1b128792d8cdd51e9f9) (but referring to [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/2/files))\r\n - Before \"text-scoring\" was a task_category, with task_ids [\"semantic-similarity-scoring\", \"sentiment-scoring\"]\r\n - Now all three are task_ids [\"text-scoring\", \"semantic-similarity-scoring\", \"sentiment-scoring\"] under the task_category \"text-classification\"\r\n - People complain that their scoring tasks are not classification task\r\n - binary-classification: why don't we have binary-classification? We have multi-class-classification, multi-label-classification and sentiment-classification, but not binary-classification\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?\r\n\r\nNOTE: I'm editing this comment to add more feedback",
"As someone with feedback on the updates (which I highly appreciate seeing included here :D), a few comments from a \"user perspective\": \r\n\r\n* I think the general confusion for me was also surrounding the hierarchy; it doesn't really become super clear (even when using the tagger space) that one is a subset of the other, especially since it seems to be still possible to include fine-grained tasks without the \"parent category\"?\r\n* The datasets explorer still shows tags that are no longer valid (e.g., super specific ones such as `summarization-other-paper-abstract-generation`, but also ones that should be `task_categories`, such as `summarization`). I'm assuming this will be fixed soon, but until then it can confuse people who don't understand why they suddenly can't use seemingly still valid tags anymore.\r\n* As I mentioned to @albertvillanova, having a dedicated page in the docs with explanations (especially wrt the difference between `task_categories` and `task_ids`) would be super helpful. However, I think it would have been sufficient to just include some description in the dataset PRs where you can link to the Github/other discussion on the topic :) That way, I can check myself what changes are expected to happen.\r\n\r\nThanks again for the streamlining process, I personally learned a fair bit about the tagging structure in the meantime!\r\nBest,\r\nDennis",
"Thanks to you both for your feedback! super useful! cc'ing @osanseviero too π\r\n\r\n> The datasets explorer still shows tags that are no longer valid\r\n\r\nwait which explorer is that? is it https://huggingface.co/datasets/viewer/ ?\r\n",
"Sorry, this one: https://huggingface.co/datasets \r\nAnd then selecting the \"Fine-Grained Tasks\".",
"good feedback! we'll improve this",
"Super useful feedback, thanks a lot!",
"- Some people do not agree about current \"hierarchy\":\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?",
"@albertvillanova \r\nThank you for sharing our voice here!\r\n\r\nYes, we want `symbolic-regression` to be listed as a task. This task has been attracting attention from the machine learning/deep learning community, and unfortunately existing symbolic regression datasets are de-centralized in the community (hosted at individual platforms like author website, github, etc).\r\nIt would be great for the community if Hugging Face can support the task."
] | 2022-10-19T09:41:42
| 2022-11-10T05:25:58
| 2022-10-25T06:17:00
|
MEMBER
| null | null | null | null |
## Describe
Once we have agreed on a common naming for task tags for all open source projects, we should align on them.
## Steps
- [x] Align task tags in canonical datasets
- [x] task_categories: 4 datasets
- [x] task_ids (by @lhoestq)
- [x] Open PRs in community datasets
- [x] task_categories: 451 datasets
- [x] task_ids: 556 datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5137/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5137/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 5 days, 20:35:18
|
https://api.github.com/repos/huggingface/datasets/issues/5135
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5135/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5135/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5135/events
|
https://github.com/huggingface/datasets/issues/5135
| 1,414,413,519
|
I_kwDODunzps5UTjzP
| 5,135
|
Update docs once dataset scripts transferred to the Hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2022-10-19T06:58:19
| 2022-10-20T08:10:01
| 2022-10-20T08:10:01
|
MEMBER
| null | null | null | null |
## Describe the bug
As discussed in:
- https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701
we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub):
- #4974
Concretely:
- [x] Datasets on GitHub (legacy): https://huggingface.co/docs/datasets/main/en/share#datasets-on-github-legacy
- [x] ADD_NEW_DATASET: https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md
- ...
This PR complements the work of:
- #5067
This PR is a follow-up of PRs:
- #3777
CC: @julien-c
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5135/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5135/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 1:11:42
|
https://api.github.com/repos/huggingface/datasets/issues/5134
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5134/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5134/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5134/events
|
https://github.com/huggingface/datasets/issues/5134
| 1,413,623,687
|
I_kwDODunzps5UQi-H
| 5,134
|
Raise ImportError instead of OSError if required extraction library is not installed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayushthe1",
"id": 114604338,
"login": "ayushthe1",
"node_id": "U_kgDOBtS5Mg",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayushthe1",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayushthe1",
"id": 114604338,
"login": "ayushthe1",
"node_id": "U_kgDOBtS5Mg",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayushthe1",
"user_view_type": "public"
}
] |
[
"hey ,i would like to work on this issue . Please assign it to me.",
"hey @mariosasko , i made a pr for this issue. Could you please review it.\r\nAlso i found multiple `OSError` in `extract.py` file which i thought could be replaced too but wasn't sure about them.\r\nPlease do tell if that also needs to be done."
] | 2022-10-18T17:53:46
| 2022-10-25T15:56:59
| 2022-10-25T15:56:59
|
COLLABORATOR
| null | null | null | null |
According to the official Python docs, `OSError` should be thrown in the following situations:
> This exception is raised when a system function returns a system-related error, including I/O failures such as βfile not foundβ or βdisk fullβ (not for illegal argument types or other incidental errors).
Hence, it makes more sense to raise `ImportError` instead of `OSError` when the required extraction/decompression library is not installed.
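A minimal sketch of the kind of change being proposed; the helper below and the `py7zr` example are illustrative and not the actual code in `extract.py`:
```python
import importlib.util


def extract_7z(input_path: str, output_path: str) -> None:
    # A missing optional dependency is an environment problem, not a
    # system-related I/O failure, hence ImportError rather than OSError.
    if importlib.util.find_spec("py7zr") is None:
        raise ImportError("Please pip install py7zr to extract .7z archives.")
    import py7zr

    with py7zr.SevenZipFile(input_path, "r") as archive:
        archive.extractall(path=output_path)
```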
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5134/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5134/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 22:03:13
|
https://api.github.com/repos/huggingface/datasets/issues/5133
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5133/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5133/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5133/events
|
https://github.com/huggingface/datasets/issues/5133
| 1,413,623,462
|
I_kwDODunzps5UQi6m
| 5,133
|
Tensor operation not functioning in dataset mapping
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xinghaow99",
"id": 50691954,
"login": "xinghaow99",
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xinghaow99",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .",
"> Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .\r\n\r\nThank you. "
] | 2022-10-18T17:53:35
| 2022-10-19T04:15:45
| 2022-10-19T04:15:44
|
NONE
| null | null | null | null |
## Describe the bug
I'm doing a torch.mean() operation in data preprocessing, and it's not working.
## Steps to reproduce the bug
```
from transformers import pipeline
import torch
import numpy as np
from datasets import load_dataset
device = 'cuda:0'
raw_dataset = load_dataset("glue", "sst2")
feature_extraction = pipeline('feature-extraction', 'bert-base-uncased', device=device)
def extracted_data(examples):
# feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device)
# feature = torch.mean(feature, dim=1)
feature = np.asarray(feature_extraction(examples['sentence'], batch_size=16)).squeeze().mean(1)
print(feature.shape)
return {'feature': feature}
extracted_dataset = raw_dataset.map(extracted_data, batched=True, batch_size=16)
```
## Results
When running with torch.mean(), the printed shape is [16, seq_len, 768], which is exactly the same as before the operation, while the NumPy version works just fine and gives [16, 768].
## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-4.4.0-142-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
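The comment above explains the root cause; the following standalone sketch with dummy data (a fixed `seq_len=12` is assumed purely for illustration) shows why the two snippets behave differently:
```python
import numpy as np
import torch

# The feature-extraction pipeline returns one [1, seq_len, hidden] array per
# sentence, so a batch of 16 stacks into an extra singleton dimension.
batch = [np.random.rand(1, 12, 768) for _ in range(16)]

print(np.asarray(batch).squeeze().mean(1).shape)  # (16, 768)

t = torch.tensor(np.asarray(batch))   # shape [16, 1, 12, 768]
print(torch.mean(t, dim=1).shape)     # [16, 12, 768] -- only averages the singleton dim
print(t.squeeze().mean(1).shape)      # [16, 768] -- matches the NumPy result
```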
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xinghaow99",
"id": 50691954,
"login": "xinghaow99",
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xinghaow99",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5133/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5133/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10:22:09
|
https://api.github.com/repos/huggingface/datasets/issues/5132
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5132/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5132/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5132/events
|
https://github.com/huggingface/datasets/issues/5132
| 1,413,607,306
|
I_kwDODunzps5UQe-K
| 5,132
|
Deprecate `num_proc` parameter in `DownloadManager.extract`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayushthe1",
"id": 114604338,
"login": "ayushthe1",
"node_id": "U_kgDOBtS5Mg",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayushthe1",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayushthe1",
"id": 114604338,
"login": "ayushthe1",
"node_id": "U_kgDOBtS5Mg",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayushthe1",
"user_view_type": "public"
}
] |
[
"I can take this! #self-assign",
"#self-assign",
"@lazarust i'm already working on this issue :smile: ",
"#self-assign",
"hey @mariosasko , i made a pr for this issue. Could you please review it."
] | 2022-10-18T17:41:05
| 2022-10-25T15:56:46
| 2022-10-25T15:56:46
|
COLLABORATOR
| null | null | null | null |
The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `DownloadConfig`'s `num_proc` to `map_nested` instead, as it's done in `DownloadManager.download`.
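A self-contained sketch of the sentinel-based deprecation pattern this describes; the function below is a simplified stand-in, not the real `DownloadManager.extract`:
```python
import warnings

_DEPRECATED = "deprecated"


def extract(path_or_paths, num_proc=_DEPRECATED, download_config_num_proc=None):
    if num_proc != _DEPRECATED:
        warnings.warn(
            "'num_proc' is deprecated; pass DownloadConfig(num_proc=...) instead, "
            "as is already done for DownloadManager.download.",
            FutureWarning,
        )
        download_config_num_proc = num_proc
    # The real implementation would forward download_config_num_proc to
    # map_nested, mirroring what DownloadManager.download does.
    return path_or_paths


extract(["a.tar.gz"], num_proc=4)  # emits a FutureWarning
```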
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5132/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5132/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 22:15:41
|
https://api.github.com/repos/huggingface/datasets/issues/5131
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5131/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5131/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5131/events
|
https://github.com/huggingface/datasets/issues/5131
| 1,413,534,863
|
I_kwDODunzps5UQNSP
| 5,131
|
WikiText 103 tokenizer hangs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12433427?v=4",
"events_url": "https://api.github.com/users/TrentBrick/events{/privacy}",
"followers_url": "https://api.github.com/users/TrentBrick/followers",
"following_url": "https://api.github.com/users/TrentBrick/following{/other_user}",
"gists_url": "https://api.github.com/users/TrentBrick/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TrentBrick",
"id": 12433427,
"login": "TrentBrick",
"node_id": "MDQ6VXNlcjEyNDMzNDI3",
"organizations_url": "https://api.github.com/users/TrentBrick/orgs",
"received_events_url": "https://api.github.com/users/TrentBrick/received_events",
"repos_url": "https://api.github.com/users/TrentBrick/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TrentBrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TrentBrick/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TrentBrick",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"any updates on this? It happens to me on [OpenWikiText-20%](https://huggingface.co/datasets/Bingsu/openwebtext_20p) dataset, but not on [OpenWebText-10k](https://huggingface.co/datasets/stas/openwebtext-10k). This is really strange because I don't change anything else in my running script.\r\n\r\ntransformers version 4.18.0.dev0\r\ndatasets version 1.18.0"
] | 2022-10-18T16:44:00
| 2023-08-08T08:42:40
| 2023-07-21T14:41:51
|
NONE
| null | null | null | null |
See issue here: https://github.com/huggingface/transformers/issues/19702
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5131/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5131/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 275 days, 21:57:51
|
https://api.github.com/repos/huggingface/datasets/issues/5129
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5129/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5129/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5129/events
|
https://github.com/huggingface/datasets/issues/5129
| 1,413,031,664
|
I_kwDODunzps5UOSbw
| 5,129
|
unexpected `cast` or `class_encode_column` result after `rename_column`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35144675?v=4",
"events_url": "https://api.github.com/users/quaeast/events{/privacy}",
"followers_url": "https://api.github.com/users/quaeast/followers",
"following_url": "https://api.github.com/users/quaeast/following{/other_user}",
"gists_url": "https://api.github.com/users/quaeast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/quaeast",
"id": 35144675,
"login": "quaeast",
"node_id": "MDQ6VXNlcjM1MTQ0Njc1",
"organizations_url": "https://api.github.com/users/quaeast/orgs",
"received_events_url": "https://api.github.com/users/quaeast/received_events",
"repos_url": "https://api.github.com/users/quaeast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/quaeast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quaeast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/quaeast",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...",
"Hi, ζΉεδΈ. I tried running the code with exact the same configuration (both datasets 2.5.2 and 2.6.1, python, pyarrow, pandas), but on Linux. The results seem to be the expected `{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}`.\r\nI don't have a Mac device. I can't verify whether this is a M1 chip-specific problem.",
"I've just tested the code on my M1 Mac, and it behaves as expected.",
"> Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...\r\n\r\nThank you for your attention and feel sorry to take your time. Since this is a bug of old version, I think mybe my problem is because `cast` operation directaly used cached data generated by older verion of `datasets`. I tried to deleted the cached data and I got expected result.\r\n"
] | 2022-10-18T11:15:24
| 2022-10-19T03:02:26
| 2022-10-19T03:02:26
|
NONE
| null | null | null | null |
## Describe the bug
When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it converts all the values in this column into one value. I also ran this script in version 2.5.2, where this bug does not appear, so I switched to the older version.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("amazon_reviews_multi", "en")
data = dataset['train']
data = data.remove_columns(
[
"review_id",
"product_id",
"reviewer_id",
"review_title",
"language",
"product_category",
]
)
data = data.rename_column("review_body", "text")
data1 = data.class_encode_column("stars")
print(set(data1.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
data = data.rename_column("stars", "label")
print(set(data.data.columns[0]))
# output: {<pyarrow.Int32Scalar: 5>, <pyarrow.Int32Scalar: 4>, <pyarrow.Int32Scalar: 1>, <pyarrow.Int32Scalar: 3>, <pyarrow.Int32Scalar: 2>}
data2 = data.class_encode_column("label")
print(set(data2.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 0>}
```
## Expected results
the last print should be:
{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
## Actual results
but it outputs:
{<pyarrow.Int64Scalar: 0>}
## Environment info
- `datasets` version: 2.6.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
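As noted in the comments above, the culprit turned out to be a stale cache written by an older `datasets` version. A small sketch of two existing ways to rule that out (the dataset name is simply the one from the reproduction above):
```python
from datasets import load_dataset

data = load_dataset("amazon_reviews_multi", "en", split="train")

# Drop the cached Arrow files produced by earlier transforms on this dataset
print(data.cleanup_cache_files(), "cache files removed")

# Or re-download and re-prepare everything from scratch
data = load_dataset(
    "amazon_reviews_multi", "en", split="train", download_mode="force_redownload"
)
```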
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35144675?v=4",
"events_url": "https://api.github.com/users/quaeast/events{/privacy}",
"followers_url": "https://api.github.com/users/quaeast/followers",
"following_url": "https://api.github.com/users/quaeast/following{/other_user}",
"gists_url": "https://api.github.com/users/quaeast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/quaeast",
"id": 35144675,
"login": "quaeast",
"node_id": "MDQ6VXNlcjM1MTQ0Njc1",
"organizations_url": "https://api.github.com/users/quaeast/orgs",
"received_events_url": "https://api.github.com/users/quaeast/received_events",
"repos_url": "https://api.github.com/users/quaeast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/quaeast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quaeast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/quaeast",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5129/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5129/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 15:47:02
|
https://api.github.com/repos/huggingface/datasets/issues/5123
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5123/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5123/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5123/events
|
https://github.com/huggingface/datasets/issues/5123
| 1,410,828,756
|
I_kwDODunzps5UF4nU
| 5,123
|
datasets freezes with streaming mode in multiple-gpu
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59409879?v=4",
"events_url": "https://api.github.com/users/jackfeinmann5/events{/privacy}",
"followers_url": "https://api.github.com/users/jackfeinmann5/followers",
"following_url": "https://api.github.com/users/jackfeinmann5/following{/other_user}",
"gists_url": "https://api.github.com/users/jackfeinmann5/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jackfeinmann5",
"id": 59409879,
"login": "jackfeinmann5",
"node_id": "MDQ6VXNlcjU5NDA5ODc5",
"organizations_url": "https://api.github.com/users/jackfeinmann5/orgs",
"received_events_url": "https://api.github.com/users/jackfeinmann5/received_events",
"repos_url": "https://api.github.com/users/jackfeinmann5/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jackfeinmann5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackfeinmann5/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jackfeinmann5",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] |
[
"@lhoestq I tested the script without accelerator, and I confirm this is due to datasets part as this gets similar results without accelerator.",
"Hi ! You said it works on 1 GPU but doesn't wortk without accelerator - what's the difference between running on 1 GPU and running without accelerator in your case ?",
"Hi @lhoestq \r\nthanks for coming back to me. Sorry for the confusion I made. I meant this works fine on 1 GPU, but on multi-gpu it is freezing. \"accelerator\" is not an issue as if you adapt the code without accelerator this still gets the same issue.\r\nIn order to test it. Please run \"accelerate config\", then use the setup for multi-gpu in one node.\r\nAfter that run \"accelerate launch code.py\" and then you would see the freezing occurs.",
"Hi @lhoestq \r\ncould you have the chance to reproduce the error by running the minimal example shared?\r\nthanks",
"I think you need to do `train_dataset = train_dataset.with_format(\"torch\")` to work with the DataLoader in a multiprocessing setup :)\r\n\r\nThe hang is probably caused by our streamign lib `fsspec` which doesn't work in multiprocessing out of the box - but we made it work with the PyTorch DataLoader when the dataset format is set to \"torch\"",
"Hi @lhoestq \r\nthanks for the response. I added the line suggested right before calling `with accelerator.main_process_first():` in the code above and I confirm this also freezes. to reproduce it please run \"accelerate launch code.py\". I was wondering if you could have more suggestions for me? I do not have an idea how to fix this or debug this freezing. many thanks.",
"Maybe the `fsspec` stuff need to be clearer even before - can you try to run this function at the very beginning of your script ?\r\n```python\r\nimport fsspec\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n \"\"\"\r\n Clear reference to the loop and thread.\r\n This is necessary otherwise HTTPFileSystem hangs in the ML training loop.\r\n Only required for fsspec >= 0.9.0\r\n See https://github.com/fsspec/gcsfs/issues/379\r\n \"\"\"\r\n fsspec.asyn.iothread[0] = None\r\n fsspec.asyn.loop[0] = None\r\n\r\n_set_fsspec_for_multiprocess()\r\n```",
"Hi @lhoestq \r\nthank you. I tried it, I am getting `AttributeError: module 'fsspec' has no attribute 'asyn'`. which version of fsspect do you use?\r\nI am using \r\n```fsspec 2022.8.2 pypi_0 pypi```\r\nthank you.",
"Hi @lhoestq \r\nI solved `fsspec` error with this hack for now https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 but this is still freezing, I greatly appreciate if you could run this script on your side. Many thanks.\r\n\r\n```\r\nimport fsspec\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n \"\"\"\r\n Clear reference to the loop and thread.\r\n This is necessary otherwise HTTPFileSystem hangs in the ML training loop.\r\n Only required for fsspec >= 0.9.0\r\n See https://github.com/fsspec/gcsfs/issues/379\r\n \"\"\"\r\n fsspec.asyn.iothread[0] = None\r\n fsspec.asyn.loop[0] = None\r\n\r\n\r\n_set_fsspec_for_multiprocess()\r\n\r\nfrom accelerate import Accelerator\r\nfrom accelerate.logging import get_logger\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data.dataloader import DataLoader\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\nimport torch\r\nfrom accelerate.logging import get_logger\r\nfrom torch.utils.data import IterableDataset\r\nfrom torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe\r\n\r\n\r\nlogger = get_logger(__name__)\r\n\r\n\r\nclass ConstantLengthDataset(IterableDataset):\r\n \"\"\"\r\n Iterable dataset that returns constant length chunks of tokens from stream of text files.\r\n Args:\r\n tokenizer (Tokenizer): The processor used for proccessing the data.\r\n dataset (dataset.Dataset): Dataset with text files.\r\n infinite (bool): If True the iterator is reset after dataset reaches end else stops.\r\n max_seq_length (int): Length of token sequences to return.\r\n num_of_sequences (int): Number of token sequences to keep in buffer.\r\n chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.\r\n \"\"\"\r\n\r\n def __init__(\r\n self,\r\n tokenizer,\r\n dataset,\r\n infinite=False,\r\n max_seq_length=1024,\r\n num_of_sequences=1024,\r\n chars_per_token=3.6,\r\n ):\r\n self.tokenizer = tokenizer\r\n # self.concat_token_id = tokenizer.bos_token_id\r\n self.dataset = dataset\r\n self.max_seq_length = max_seq_length\r\n self.epoch = 0\r\n self.infinite = infinite\r\n self.current_size = 0\r\n self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences\r\n self.content_field = \"text\"\r\n\r\n def __iter__(self):\r\n iterator = iter(self.dataset)\r\n more_examples = True\r\n while more_examples:\r\n buffer, buffer_len = [], 0\r\n while True:\r\n if buffer_len >= self.max_buffer_size:\r\n break\r\n try:\r\n buffer.append(next(iterator)[self.content_field])\r\n buffer_len += len(buffer[-1])\r\n except StopIteration:\r\n if self.infinite:\r\n iterator = iter(self.dataset)\r\n self.epoch += 1\r\n logger.info(f\"Dataset epoch: {self.epoch}\")\r\n else:\r\n more_examples = False\r\n break\r\n tokenized_inputs = self.tokenizer(buffer, truncation=False)[\"input_ids\"]\r\n all_token_ids = []\r\n for tokenized_input in tokenized_inputs:\r\n all_token_ids.extend(tokenized_input)\r\n for i in range(0, len(all_token_ids), self.max_seq_length):\r\n input_ids = all_token_ids[i : i + self.max_seq_length]\r\n if len(input_ids) == self.max_seq_length:\r\n self.current_size += 1\r\n yield torch.tensor(input_ids)\r\n\r\n def shuffle(self, buffer_size=1000):\r\n return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n\r\n\r\ndef create_dataloaders(tokenizer, accelerator):\r\n ds_kwargs = {\"streaming\": True}\r\n # In distributed training, the load_dataset function gaurantees that only one process\r\n 
# can concurrently download the dataset.\r\n datasets = load_dataset(\r\n \"c4\",\r\n \"en\",\r\n cache_dir=\"cache_dir\",\r\n **ds_kwargs,\r\n )\r\n train_data, valid_data = datasets[\"train\"], datasets[\"validation\"]\r\n with accelerator.main_process_first():\r\n train_data = train_data.shuffle(buffer_size=10000, seed=None)\r\n train_dataset = ConstantLengthDataset(\r\n tokenizer,\r\n train_data,\r\n infinite=True,\r\n max_seq_length=256,\r\n )\r\n valid_dataset = ConstantLengthDataset(\r\n tokenizer,\r\n valid_data,\r\n infinite=False,\r\n max_seq_length=256,\r\n )\r\n train_dataset = train_dataset.shuffle(buffer_size=10000)\r\n train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True)\r\n eval_dataloader = DataLoader(valid_dataset, batch_size=160)\r\n return train_dataloader, eval_dataloader\r\n\r\n\r\ndef main():\r\n # Accelerator.\r\n logging_dir = \"data_save_dir/log\"\r\n accelerator = Accelerator(\r\n gradient_accumulation_steps=1,\r\n mixed_precision=\"bf16\",\r\n log_with=\"tensorboard\",\r\n logging_dir=logging_dir,\r\n )\r\n # We need to initialize the trackers we use, and also store our configuration.\r\n # The trackers initializes automatically on the main process.\r\n if accelerator.is_main_process:\r\n accelerator.init_trackers(\"test\")\r\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\n # Load datasets and create dataloaders.\r\n train_dataloader, _ = create_dataloaders(tokenizer, accelerator)\r\n\r\n train_dataloader = accelerator.prepare(train_dataloader)\r\n for step, batch in enumerate(train_dataloader, start=1):\r\n print(step)\r\n accelerator.end_training()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```",
"Are you using `Pytorch 1.11`? Otherwise the script freezes because of the shuffling in this line: \r\n```\r\n return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n```\r\n`ShufflerIterDataPipe` behavior must have changed for newer Pytorch versions. But this doesn't change whether you're using streaming or not in `datasets`, so probably not the same issue, but something to try.",
"> Are you using `Pytorch 1.11`? Otherwise the script freezes because of the shuffling in this line:\r\n> \r\n> ```\r\n> return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n> ```\r\n> \r\n> `ShufflerIterDataPipe` behavior must have changed for newer Pytorch versions. But this doesn't change whether you're using streaming or not in `datasets`, so probably not the same issue, but something to try.\r\n\r\nI met the same issue for pytorch 1.12 and 1.13, is there a way to work around for this function for newer pytorch versions?"
] | 2022-10-17T03:28:16
| 2023-05-14T06:55:20
| null |
NONE
| null | null | null | null |
## Describe the bug
Hi. I am using this dataloader, which processes large datasets in streaming mode and comes from one of the Hugging Face examples. I am using it to read C4: https://github.com/huggingface/transformers/blob/b48ac1a094e572d6076b46a9e4ed3e0ebe978afc/examples/research_projects/codeparrot/scripts/codeparrot_training.py#L22
When using multiple GPUs with Accelerate on one node, the code freezes, but it works on 1 GPU:
```
10/16/2022 14:18:46 - INFO - datasets.info - Loading Dataset Infos from /home/jack/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01
Steps: 0%| | 0/400000 [00:00<?, ?it/s]10/16/2022 14:18:47 - INFO - torch.utils.data.dataloader - Shared seed (135290893754684706) sent to store on rank 0
```
# Code to reproduce
please run this code with `accelerate launch code.py`
```
from accelerate import Accelerator
from accelerate.logging import get_logger
from datasets import load_dataset
from torch.utils.data.dataloader import DataLoader
import torch
from transformers import AutoTokenizer
from torch.utils.data import IterableDataset
from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe
logger = get_logger(__name__)
class ConstantLengthDataset(IterableDataset):
"""
Iterable dataset that returns constant length chunks of tokens from stream of text files.
Args:
        tokenizer (Tokenizer): The processor used for processing the data.
dataset (dataset.Dataset): Dataset with text files.
infinite (bool): If True the iterator is reset after dataset reaches end else stops.
max_seq_length (int): Length of token sequences to return.
num_of_sequences (int): Number of token sequences to keep in buffer.
chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.
"""
def __init__(
self,
tokenizer,
dataset,
infinite=False,
max_seq_length=1024,
num_of_sequences=1024,
chars_per_token=3.6,
):
self.tokenizer = tokenizer
# self.concat_token_id = tokenizer.bos_token_id
self.dataset = dataset
self.max_seq_length = max_seq_length
self.epoch = 0
self.infinite = infinite
self.current_size = 0
self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences
self.content_field = "text"
def __iter__(self):
iterator = iter(self.dataset)
more_examples = True
while more_examples:
buffer, buffer_len = [], 0
while True:
if buffer_len >= self.max_buffer_size:
break
try:
buffer.append(next(iterator)[self.content_field])
buffer_len += len(buffer[-1])
except StopIteration:
if self.infinite:
iterator = iter(self.dataset)
self.epoch += 1
logger.info(f"Dataset epoch: {self.epoch}")
else:
more_examples = False
break
tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"]
all_token_ids = []
for tokenized_input in tokenized_inputs:
all_token_ids.extend(tokenized_input)
for i in range(0, len(all_token_ids), self.max_seq_length):
input_ids = all_token_ids[i : i + self.max_seq_length]
if len(input_ids) == self.max_seq_length:
self.current_size += 1
yield torch.tensor(input_ids)
def shuffle(self, buffer_size=1000):
return ShufflerIterDataPipe(self, buffer_size=buffer_size)
def create_dataloaders(tokenizer, accelerator):
ds_kwargs = {"streaming": True}
    # In distributed training, the load_dataset function guarantees that only one process
# can concurrently download the dataset.
datasets = load_dataset(
"c4",
"en",
cache_dir="cache_dir",
**ds_kwargs,
)
train_data, valid_data = datasets["train"], datasets["validation"]
with accelerator.main_process_first():
train_data = train_data.shuffle(buffer_size=10000, seed=None)
train_dataset = ConstantLengthDataset(
tokenizer,
train_data,
infinite=True,
max_seq_length=256,
)
valid_dataset = ConstantLengthDataset(
tokenizer,
valid_data,
infinite=False,
max_seq_length=256,
)
train_dataset = train_dataset.shuffle(buffer_size=10000)
train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True)
eval_dataloader = DataLoader(valid_dataset, batch_size=160)
return train_dataloader, eval_dataloader
def main():
# Accelerator.
logging_dir = "data_save_dir/log"
accelerator = Accelerator(
gradient_accumulation_steps=1,
mixed_precision="bf16",
log_with="tensorboard",
logging_dir=logging_dir,
)
# We need to initialize the trackers we use, and also store our configuration.
# The trackers initializes automatically on the main process.
if accelerator.is_main_process:
accelerator.init_trackers("test")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Load datasets and create dataloaders.
train_dataloader, _ = create_dataloaders(tokenizer, accelerator)
train_dataloader = accelerator.prepare(train_dataloader)
for step, batch in enumerate(train_dataloader, start=1):
print(step)
accelerator.end_training()
if __name__ == "__main__":
main()
```
## Results expected
Being able to run the code for streaming datasets with multi-GPU
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: linux
- Python version: 3.9.12
- PyArrow version: 9.0.0
@lhoestq I do not have any idea why this freezing happens. When I removed the streaming mode it worked fine, so I know the problem is caused by the streaming mode of the dataloader not working well with the multi-GPU setting. Since the datasets are large, I hope to keep the streaming mode. I very much appreciate your help.
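For reference, here is a minimal sketch combining the two workarounds suggested in the comments (resetting fsspec's cached event loop/thread before the DataLoader workers fork, and setting the streaming dataset's format to "torch"); exactly where `with_format` should be applied relative to the custom wrapper is an assumption:
```python
import fsspec.asyn
from datasets import load_dataset

def _set_fsspec_for_multiprocess() -> None:
    # Clear the cached loop/thread so HTTPFileSystem does not hang in worker processes
    # (only needed for fsspec >= 0.9.0, see https://github.com/fsspec/gcsfs/issues/379).
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None

_set_fsspec_for_multiprocess()

train_data = load_dataset("c4", "en", streaming=True, split="train")
train_data = train_data.with_format("torch")  # make the stream DataLoader/multiprocessing friendly
```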
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5123/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5123/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5118
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5118/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5118/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5118/events
|
https://github.com/huggingface/datasets/issues/5118
| 1,410,547,373
|
I_kwDODunzps5UEz6t
| 5,118
|
Installing `datasets` on M1 computers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david1542",
"id": 9879252,
"login": "david1542",
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"repos_url": "https://api.github.com/users/david1542/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david1542",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thanks for reporting, @david1542."
] | 2022-10-16T16:50:08
| 2022-10-19T09:10:08
| 2022-10-19T09:10:08
|
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`.
On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1?
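For illustration, one way such a conditional requirement could be expressed is with PEP 508 environment markers; this is only a hypothetical sketch, not the project's actual setup configuration:
```python
# Hypothetical sketch: pick the TensorFlow package via environment markers so that
# Apple Silicon (arm64 macOS) machines get tensorflow-macos instead of tensorflow.
TENSORFLOW_REQUIRE = [
    "tensorflow>=2.3,!=2.6.0,!=2.6.1; sys_platform != 'darwin' or platform_machine != 'arm64'",
    "tensorflow-macos; sys_platform == 'darwin' and platform_machine == 'arm64'",
]
```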
## Steps to reproduce the bug
Freshly clone this project (on an M1 machine), create a virtualenv, and run this:
```python
pip install -e ".[dev]"
```
## Expected results
Installation should be smooth, and all the dependencies should be installed on M1.
## Actual results
You should receive an error, saying pip couldn't find a version that matches this pattern:
```
tensorflow>=2.3,!=2.6.0,!=2.6.1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.2.dev0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5118/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5118/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 16:20:00
|
https://api.github.com/repos/huggingface/datasets/issues/5117
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5117/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5117/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5117/events
|
https://github.com/huggingface/datasets/issues/5117
| 1,409,571,346
|
I_kwDODunzps5UBFoS
| 5,117
|
Progress bars have color red and never completed to 100%
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/63857529?v=4",
"events_url": "https://api.github.com/users/echatzikyriakidis/events{/privacy}",
"followers_url": "https://api.github.com/users/echatzikyriakidis/followers",
"following_url": "https://api.github.com/users/echatzikyriakidis/following{/other_user}",
"gists_url": "https://api.github.com/users/echatzikyriakidis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/echatzikyriakidis",
"id": 63857529,
"login": "echatzikyriakidis",
"node_id": "MDQ6VXNlcjYzODU3NTI5",
"organizations_url": "https://api.github.com/users/echatzikyriakidis/orgs",
"received_events_url": "https://api.github.com/users/echatzikyriakidis/received_events",
"repos_url": "https://api.github.com/users/echatzikyriakidis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/echatzikyriakidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echatzikyriakidis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/echatzikyriakidis",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david1542",
"id": 9879252,
"login": "david1542",
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"repos_url": "https://api.github.com/users/david1542/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david1542",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david1542",
"id": 9879252,
"login": "david1542",
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"repos_url": "https://api.github.com/users/david1542/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david1542",
"user_view_type": "public"
}
] |
[
"Hi @echatzikyriakidis, thanks for submitting the issue.\r\nWhich shell are you using exactly? I tried to run the command you sent, but I don't see colors at all π§\r\n\r\nI tried from bash and zsh as well.",
"Hi @david1542 ,\r\n\r\nI use Google Colab.\r\n",
"Got it. I [created a PR](https://github.com/huggingface/datasets/pull/5120) that fixes this issue. Turns out that the wrapping logic for the inner loop was slightly incorrect.",
"Thank you!",
"Hello @mariosasko \r\n\r\nI am still facing this issue. Was this problem fixed?\r\n\r\n\r\n\r\nI cleared the hugging face cache before running, and no error message was given. Let me know if you need a minimal repro of my code."
] | 2022-10-14T16:12:30
| 2024-06-19T19:03:42
| 2022-10-23T12:58:41
|
NONE
| null | null | null | null |
## Describe the bug
Progress bars after transformative operations turn red and are never completed to 100%.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('rotten_tomatoes', split='test').filter(lambda o: True)
```
## Expected results
The progress bar should reach 100% and be green.
## Actual results
The progress bar turns red and never completes to 100%.
## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5117/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5117/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 8 days, 20:46:11
|
https://api.github.com/repos/huggingface/datasets/issues/5114
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5114/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5114/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5114/events
|
https://github.com/huggingface/datasets/issues/5114
| 1,409,236,738
|
I_kwDODunzps5T_z8C
| 5,114
|
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/bruno-hays/events{/privacy}",
"followers_url": "https://api.github.com/users/bruno-hays/followers",
"following_url": "https://api.github.com/users/bruno-hays/following{/other_user}",
"gists_url": "https://api.github.com/users/bruno-hays/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bruno-hays",
"id": 48770768,
"login": "bruno-hays",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/bruno-hays/orgs",
"received_events_url": "https://api.github.com/users/bruno-hays/received_events",
"repos_url": "https://api.github.com/users/bruno-hays/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bruno-hays/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bruno-hays/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bruno-hays",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] |
[
"Hi Hubert! Could you please probably create a publicly available `gs://` dataset link? I think this would be easier for others to directly start to debug.",
"What seems to work is to change the line to:\r\n```\r\nfs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)\r\n```"
] | 2022-10-14T11:54:53
| 2022-11-19T07:13:10
| null |
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
The function `load_from_disk` fails when using a remote filesystem because of wrong temporary local path generation in the `load_from_disk` method of `arrow_dataset.py`:
```python
if is_remote_filesystem(fs):
src_dataset_path = extract_path_from_uri(dataset_path)
dataset_path = Dataset._build_local_temp_path(src_dataset_path)
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
```
If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train`
Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (notice we have train twice)
Instead of downloading the remote folder itself, we should download all the files inside it so that the resulting local path is correct:
```python
fs.download(os.path.join(src_dataset_path, "*"), dataset_path.as_posix(), recursive=True)
```
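Alternatively, as suggested in one of the comments, the duplicated last path component can be avoided by downloading the remote folder into the parent of the local temporary path; a sketch of that variant (using the same `fs`, `src_dataset_path` and `dataset_path` as in the snippet above):
```python
# Sketch of the alternative fix from the comments: download into the parent directory
# so the trailing "train" folder is not nested twice.
fs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)
```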
## Steps to reproduce the bug
```python
import gcsfs
from datasets import load_from_disk

fs = gcsfs.GCSFileSystem(**storage_options)  # storage_options: your GCS credentials/config
dataset = load_from_disk("common_voice_processed") # loading local dataset previously saved locally, works fine
dataset.save_to_disk(output_dir, fs=fs) #works fine
dataset = load_from_disk(output_dir, fs=fs) # crashes
```
## Expected results
The dataset is loaded
## Actual results
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-2.6.1.dev0
- Platform: mac os monterey 12.5.1
- Python version: 3.8.13
- PyArrow version:pyarrow==9.0.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5114/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5114/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5112
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5112/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5112/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5112/events
|
https://github.com/huggingface/datasets/issues/5112
| 1,409,143,409
|
I_kwDODunzps5T_dJx
| 5,112
|
Bug with filtered indices
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"The issue is here:\r\nhttps://github.com/huggingface/datasets/blob/3ad9644b9a2e4558dd1d0f1e43c67658674e6228/src/datasets/arrow_dataset.py#L2964",
"@PartiallyTyped, @Muennighoff: the issue is fixed.\r\n\r\nWe are planning to make a patch release today.",
"Thanks a lot for the swift response!Β For a brief moment yesterday I thought I had gone insane π€£On 14 Oct 2022, at 15:44, Albert Villanova del Moral ***@***.***> wrote:ο»Ώ\n@PartiallyTyped, @Muennighoff: the issue is fixed.\nWe are planning to make a patch release today.\n\nβReply to this email directly, view it on GitHub, or unsubscribe.You are receiving this because you were mentioned.Message ID: ***@***.***>"
] | 2022-10-14T10:35:47
| 2022-10-14T13:55:03
| 2022-10-14T12:11:45
|
MEMBER
| null | null | null | null |
## Describe the bug
As reported by @PartiallyTyped (and by @Muennighoff):
- https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524
There is an issue with the indices of a filtered dataset.
## Steps to reproduce the bug
```python
ds = Dataset.from_dict({"num": [0, 1, 2, 3]})
ds = ds.filter(lambda num: num % 2 == 0, input_columns="num", batch_size=2)
assert all(item["num"] % 2 == 0 for item in ds)
```
## Expected results
The filtered dataset should only contain examples that satisfy the filter condition (in the original report, examples whose "language" equals "english").
## Actual results
Indices pointing to examples that do not satisfy the filter (in the original report, other languages) are included in the filtered dataset's indices.
## Preliminary investigation
It seems to be a bug introduced by:
- #5030
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5112/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5112/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1:35:58
|
https://api.github.com/repos/huggingface/datasets/issues/5111
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5111/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5111/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5111/events
|
https://github.com/huggingface/datasets/issues/5111
| 1,408,143,170
|
I_kwDODunzps5T7o9C
| 5,111
|
map and filter not working properly in multiprocessing with the new release 2.6.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Same bug exists with `num_proc=1` on colab. `3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0]` ",
"Thanks for reporting, @loubnabnl and for the additional information, @PartiallyTyped.\r\n\r\nHowever, I'm not able to reproduce this issue, neither locally nor on Colab:\r\n```\r\nDataset({\r\n features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],\r\n num_rows: 10\r\n})\r\nDataset({\r\n features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],\r\n num_rows: 10\r\n})\r\n```\r\nCC: @huggingface/datasets can anybody reproduce this?",
"This is the minimum reproducible example. I ran this on the premium instances of colab.\r\n\r\n```\r\n# !pip install datasets\r\nimport datasets\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"copenlu/answerable_tydiqa\").filter(\"english\".__eq__, input_columns=\"language\")\r\nassert all(map(\"english\".__eq__, ds[\"train\"][\"language\"]))\r\n```\r\n\r\nIn my case, the number of samples is correct, however, the samples selected when indexing are wrong.\r\n\r\n```python\r\nDatasetDict({\r\n validation: Dataset({\r\n features: ['question_text', 'document_title', 'language', 'annotations', 'document_plaintext', 'document_url'],\r\n num_rows: 990\r\n })\r\n train: Dataset({\r\n features: ['question_text', 'document_title', 'language', 'annotations', 'document_plaintext', 'document_url'],\r\n num_rows: 7389\r\n })\r\n})\r\n```\r\n\r\nThe number of rows is indeed correct, and i have checked it with a version that works.",
"I can reproduce the issue on my mac too \r\n```\r\n- `datasets` version: 2.6.0\r\n- Platform: macOS-12.2.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.3\r\n```\r\nBut not on Colab with python 3.7, maybe related to python version? (didn't manage to install python 3.9)\r\n```\r\n- `datasets` version: 2.6.0\r\n- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.14\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.3.5\r\n```",
"I have the same issue, here's a simple notebook to reproduce: https://colab.research.google.com/drive/1Lvo9fg5DSpGUUgXW5JAutZ0bFsR-WV--?usp=sharing\r\n\r\n\r\n\r\n",
"I think there are 2 different issues here:\r\n- the one reported by @loubnabnl is related to multiprocessing in map and then filter; we should reproduce it first: I have tried with Python version 3.9.7 and I can't reproduce it either; maybe it is related to the version of PyArrow? To be checked.\r\n- the issue reported by @PartiallyTyped is related just to \"filter\" (without multiprocessing) and I can reproduce it.",
"Could you create another issue for the @PartiallyTyped one please ?\r\n\r\nRegarding the OP issue, I also tried on colab or locally on py3.7 or py3.10 but didn't reproduce",
"I have created another issue for the one reported by @PartiallyTyped: \r\n- #5112 ",
"I managed to reproduce your issue @loubnabnl on colab by upgrading pyarrow to 9.0.0 instead of 6.0.1",
"I managed to have a _super_ minimal reproducible example:\r\n```python\r\n\r\nfrom datasets import Dataset, concatenate_datasets\r\n\r\nds = concatenate_datasets([Dataset.from_dict({\"a\": [i]}) for i in range(10)])\r\nds2 = ds.map(lambda _: {}, batched=True)\r\nassert list(ds2) == list(ds)\r\n```\r\n(filter uses a batched `map` under the hood)",
"> the one reported by @loubnabnl is related to multiprocessing in map and then filter; we should reproduce it first: I have tried with Python version 3.9.7 and I can't reproduce it either; maybe it is related to the version of PyArrow? To be checked.\r\n\r\nSo finally it was related to PyArrow version! :+1: ",
"Doing a patch release asap :)",
"Did the patch release yesterday, lmk if you still have issues",
"It works now, thanks!\r\n"
] | 2022-10-13T17:00:55
| 2022-10-17T08:26:59
| 2022-10-14T14:59:59
|
NONE
| null | null | null | null |
## Describe the bug
When `map` is used on a dataset with more than one process, `filter` behaves strangely: it looks as if only the samples from one worker are retrieved, and one needs to specify the same `num_proc` in `filter` for it to work properly. This doesn't happen with `datasets` version 2.5.2.
In the code below, the data is filtered differently when we increase `num_proc` in `map`, although the datasets before and after mapping have identical elements.
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
def preprocess(example):
return example
ds = load_dataset("codeparrot/codeparrot-clean-valid", split="train").select([i for i in range(10)])
ds1 = ds.map(preprocess, num_proc=2)
ds2 = ds.map(preprocess)
# the datasets elements are the same
for i in range(len(ds1)):
assert ds1[i]==ds2[i]
print(f'Target column before filtering {ds1["autogenerated"]}')
print(f'Target column before filtering {ds2["autogenerated"]}')
print(f"datasets version {datasets.__version__}")
ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"])
ds_filtered_2 = ds2.filter(lambda x: not x["autogenerated"])
# all elements in the target column are False so they should all be kept, but for ds1 (mapped with num_proc=2) only the first 5 (= num_samples / num_proc) are kept
print(ds_filtered_1)
print(ds_filtered_2)
```
```
Target column before filtering [False, False, False, False, False, False, False, False, False, False]
Target column before filtering [False, False, False, False, False, False, False, False, False, False]
Dataset({
features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
num_rows: 5
})
Dataset({
features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
num_rows: 10
})
```
## Expected results
Increasing `num_proc` in `map` shouldn't alter filtering. With the previous version, 2.5.2, this doesn't happen.
## Actual results
Filtering doesn't work properly when we increase `num_proc` in `map` but not in the subsequent `filter` call.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5111/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5111/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 21:59:04
|
https://api.github.com/repos/huggingface/datasets/issues/5109
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5109/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5109/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5109/events
|
https://github.com/huggingface/datasets/issues/5109
| 1,407,434,706
|
I_kwDODunzps5T47_S
| 5,109
|
Map caching not working for some class methods
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mouhanedg56",
"id": 23029765,
"login": "Mouhanedg56",
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mouhanedg56",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"The hash used for caching is computed by pickling recursively the function passed to `map`. Maybe some objects don't have the same hash across sessions. In particular you can check the hash of your model using\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nobj = AutoModel.from_config(config=config, add_pooling_layer=False)\r\nprint(Hasher.hash(obj))\r\n```\r\n\r\nYou can find mode info here: https://huggingface.co/docs/datasets/about_cache\r\n\r\nYou can also provide your own unique hash in `map` if you want, with the `new_fingerprint` argument",
"Indeed, the hash is changing. The `dumps` function serialize the model object in different ways because the model object is not deterministic\r\n```python\r\nfrom datasets.utils.py_utils import dumps\r\nobj1 = AutoModel.from_config(config=config, add_pooling_layer=False)\r\nobj2 = AutoModel.from_config(config=config, add_pooling_layer=False)\r\n\r\ndumps(bert) == dumps(bert2). # False\r\n```\r\n\r\n> You can find mode info here: https://huggingface.co/docs/datasets/about_cache\r\n> \r\n> You can also provide your own unique hash in map if you want, with the new_fingerprint argument\r\n\r\n\r\nThanks, the doc is so helpful. Indeed, we can fix the hash and get cache hit using `new_fingerprint`. Closing the issue."
] | 2022-10-13T09:12:58
| 2022-10-17T10:38:45
| 2022-10-17T10:38:45
|
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
The cache loading is not working as expected for some class methods with a model stored in an attribute.
The new fingerprint for `_map_single` is not the same at each run: the hasher generates a different hash for the class method.
This comes from the `dumps` function in `datasets.utils.py_utils`, which generates a different dump at each run.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from transformers import AutoConfig, AutoModel, AutoTokenizer
dataset = load_dataset("ethos", "binary")
BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2"
class Object:
def __init__(self):
config = AutoConfig.from_pretrained(BASE_MODELNAME)
self.bert = AutoModel.from_config(config=config, add_pooling_layer=False)
self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME)
def tokenize(self, examples):
tokenized_texts = self.tok(
examples["text"],
padding="max_length",
truncation=True,
max_length=256,
)
return tokenized_texts
instance = Object()
result = dict()
for phase in ["train"]:
result[phase] = dataset[phase].map(instance.tokenize, batched=True, load_from_cache_file=True, num_proc=2)
```
## Expected results
Load the cache instead of recomputing the result.
## Actual results
Result recomputed from scratch at each run.
The cache works fine when deleting the `bert` attribute.
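A sketch of the workaround later confirmed in the comments: pass an explicit `new_fingerprint` to `map` so the cache key no longer depends on hashing the non-deterministic model attribute (the fingerprint string below is arbitrary; it only needs to be stable across runs):
```python
# Sketch: pin the fingerprint so map() hits the cache across runs.
result = dict()
for phase in ["train"]:
    result[phase] = dataset[phase].map(
        instance.tokenize,
        batched=True,
        load_from_cache_file=True,
        num_proc=2,
        new_fingerprint=f"ethos-binary-{phase}-minilm-tokenized",  # arbitrary but stable
    )
```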
## Environment info
- `datasets` version: 2.5.3.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.13
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mouhanedg56",
"id": 23029765,
"login": "Mouhanedg56",
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mouhanedg56",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5109/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5109/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4 days, 1:25:47
|
https://api.github.com/repos/huggingface/datasets/issues/5105
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5105/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5105/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5105/events
|
https://github.com/huggingface/datasets/issues/5105
| 1,406,078,357
|
I_kwDODunzps5Tzw2V
| 5,105
|
Specifying an existing folder in download_and_prepare deletes everything in it
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] |
[
"cc @lhoestq ",
"Thanks for reporting, @cakiki.\r\n\r\nI would say the deletion of the dir is an expected behavior though...",
"`dask.to_parquet` has an \"overwrite\" parameter and default is `False`, we could also have something similar",
"Thank you both for your feedback!\r\n\r\n@albertvillanova I think I might have have the wrong mental model of what the function was meant to do. I thought it would be an API similar to the pandas `to_XX` write methods (Like the one @lhoestq mentions) so I just assumed it would download the dataframe to whichever folder I specififed (`\"./\"` in my case) so I could load it into a dask dataframe. I absolutely did not expect it to delete everything in my local directory, including the script where I called it from :smile: \r\n\r\nI think Quentin's proposed solution sounds like a reasonable feature!",
"actually there's already a `download_mode` parameter that defaults to `REUSE_DATASET_IF_EXISTS` - so I guess it's just a matter of not deleting files unrelated to the dataset, and to overwrite existing dataset files if the download mode is `REUSE_CACHE_IF_EXISTS` or `FORCE_REDOWNLOAD`"
] | 2022-10-12T11:53:33
| 2022-10-20T11:53:59
| null |
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder already exists, everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current directory and also leads to **another bug**, whose traceback is the following:
```
Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 rotten_tomatoes_builder.download_and_prepare(output_dir=".", max_shard_size="200MB", file_format="parquet")
File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:818, in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
File /usr/lib/python3.9/contextlib.py:124, in _GeneratorContextManager.__exit__(self, type, value, traceback)
122 if type is None:
123 try:
--> 124 next(self.gen)
125 except StopIteration:
126 return False
File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:760, in incomplete_dir(dirname)
File /usr/lib/python3.9/shutil.py:722, in rmtree(path, ignore_errors, onerror)
720 os.rmdir(path)
721 except OSError:
--> 722 onerror(os.rmdir, path, sys.exc_info())
723 else:
724 try:
725 # symlinks to directories are forbidden, see bug #1669
File /usr/lib/python3.9/shutil.py:720, in rmtree(path, ignore_errors, onerror)
718 _rmtree_safe_fd(fd, path, onerror)
719 try:
--> 720 os.rmdir(path)
721 except OSError:
722 onerror(os.rmdir, path, sys.exc_info())
OSError: [Errno 22] Invalid argument: '/home/christopher/BIGSCIENCE/.'
```
## Steps to reproduce the bug
```python
rotten_tomatoes_builder = load_dataset_builder("rotten_tomatoes")
rotten_tomatoes_builder.download_and_prepare(output_dir="./test_folder", max_shard_size="200MB", file_format="parquet")
```
If `test_folder` contains any files, they will all be deleted
## Expected results
At minimum, a warning that all files will be deleted; preferably, the files should not be deleted at all.
## Actual results
N/A
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
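Until this is fixed, a small defensive sketch (not part of the library; the folder name is just an example) is to use a dedicated output directory and refuse to run if it already contains files:
```python
import os
from datasets import load_dataset_builder

output_dir = "./rotten_tomatoes_parquet"  # dedicated folder, example name
os.makedirs(output_dir, exist_ok=True)
if os.listdir(output_dir):
    raise RuntimeError(f"{output_dir} is not empty; refusing to let it be wiped")

builder = load_dataset_builder("rotten_tomatoes")
builder.download_and_prepare(
    output_dir=output_dir, max_shard_size="200MB", file_format="parquet"
)
```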
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5105/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5105/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5102
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5102/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5102/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5102/events
|
https://github.com/huggingface/datasets/issues/5102
| 1,404,746,554
|
I_kwDODunzps5Turs6
| 5,102
|
Error in create a dataset from a Python generator
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9004682?v=4",
"events_url": "https://api.github.com/users/yangxuhui/events{/privacy}",
"followers_url": "https://api.github.com/users/yangxuhui/followers",
"following_url": "https://api.github.com/users/yangxuhui/following{/other_user}",
"gists_url": "https://api.github.com/users/yangxuhui/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangxuhui",
"id": 9004682,
"login": "yangxuhui",
"node_id": "MDQ6VXNlcjkwMDQ2ODI=",
"organizations_url": "https://api.github.com/users/yangxuhui/orgs",
"received_events_url": "https://api.github.com/users/yangxuhui/received_events",
"repos_url": "https://api.github.com/users/yangxuhui/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangxuhui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangxuhui/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangxuhui",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
] |
[
"Hi, thanks for reporting! The last line should be `dataset = Dataset.from_generator(my_gen)`.",
"Can I work on this one?"
] | 2022-10-11T14:28:58
| 2022-10-12T11:31:56
| 2022-10-12T11:31:56
|
NONE
| null | null | null | null |
## Describe the bug
In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in.
```Python
>>> from datasets import Dataset
>>> def my_gen():
... for i in range(1, 4):
... yield {"a": i}
>>> dataset = Dataset.from_generator(my_dict)
```
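For reference, the corrected form of the example (per the fix pointed out in the comments) passes the generator function itself:
```Python
>>> from datasets import Dataset
>>> def my_gen():
...     for i in range(1, 4):
...         yield {"a": i}
>>> dataset = Dataset.from_generator(my_gen)
>>> dataset[0]
{'a': 1}
```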
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5102/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5102/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 21:02:58
|
https://api.github.com/repos/huggingface/datasets/issues/5100
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5100/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5100/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5100/events
|
https://github.com/huggingface/datasets/issues/5100
| 1,404,458,586
|
I_kwDODunzps5TtlZa
| 5,100
|
datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/115545475?v=4",
"events_url": "https://api.github.com/users/jagochi/events{/privacy}",
"followers_url": "https://api.github.com/users/jagochi/followers",
"following_url": "https://api.github.com/users/jagochi/following{/other_user}",
"gists_url": "https://api.github.com/users/jagochi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jagochi",
"id": 115545475,
"login": "jagochi",
"node_id": "U_kgDOBuMVgw",
"organizations_url": "https://api.github.com/users/jagochi/orgs",
"received_events_url": "https://api.github.com/users/jagochi/received_events",
"repos_url": "https://api.github.com/users/jagochi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jagochi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jagochi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jagochi",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2022-10-11T11:16:31
| 2022-10-11T13:48:26
| 2022-10-11T13:48:26
|
NONE
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/115545475?v=4",
"events_url": "https://api.github.com/users/jagochi/events{/privacy}",
"followers_url": "https://api.github.com/users/jagochi/followers",
"following_url": "https://api.github.com/users/jagochi/following{/other_user}",
"gists_url": "https://api.github.com/users/jagochi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jagochi",
"id": 115545475,
"login": "jagochi",
"node_id": "U_kgDOBuMVgw",
"organizations_url": "https://api.github.com/users/jagochi/orgs",
"received_events_url": "https://api.github.com/users/jagochi/received_events",
"repos_url": "https://api.github.com/users/jagochi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jagochi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jagochi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jagochi",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5100/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5100/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:31:55
|
https://api.github.com/repos/huggingface/datasets/issues/5099
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5099/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5099/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5099/events
|
https://github.com/huggingface/datasets/issues/5099
| 1,404,370,191
|
I_kwDODunzps5TtP0P
| 5,099
|
datasets doesn't support # in data paths
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
] |
[
"`datasets` doesn't seem to urlencode the directory names here\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/file_utils.py#L109-L111\r\n\r\nfor example we should have\r\n```python\r\nfrom datasets.utils.file_utils import hf_hub_url\r\n\r\nurl = hf_hub_url(\"loubnabnl/bigcode_csharp\", \"data/c#/data_0003.jsonl\")\r\nprint(url)\r\n# Currently returns\r\n# https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/main/data/c#/data_0003.jsonl\r\n# while it should be \r\n# https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/main/data/c%23/data_0003.jsonl\r\n```",
"I'll work on this :)",
"@loubnabnl The dataset you linked in the description of the bug does not work and returns a 404. Where can I find the dataset to reproduce the bug?",
"I think you can create a dataset repository on the Hub with a dummy file containing a `#`",
"Ah sorry it was private I just made it public, I can also help with this if needed",
"@lhoestq Should I url encode also repo_id and revision parameters? I'm not sure what are the valid characters there.\r\n\r\nPersonally, I would be cautious and only url encode the path parameter.",
"These are possible solutions (assuming `from urllib.parse import quote`):\r\n\r\n1) url encode only the path parameter:\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HUB_DATASETS_URL.format(repo_id=repo_id, path=quote(path), revision=revision)\r\n```\r\n2) url encode all parameters:\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HUB_DATASETS_URL.format(repo_id=quote(repo_id), path=quote(path), revision=quote(revision))\r\n```\r\n3) url encode the whole url:\r\n```\r\n# src/datasets/config.py\r\nHUB_DATASETS_PATH = \"/datasets/{repo_id}/resolve/{revision}/{path}\"\r\nHUB_DATASETS_URL = HF_ENDPOINT + HUB_DATASETS_PATH\r\n```\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HF_ENDPOINT + quote(config.HUB_DATASETS_PATH.format(repo_id=repo_id, path=path, revision=revision))\r\n```",
"repo_id can only contain alphanumeric characters and _- so it doesn't need to be encoded.\r\n\r\nHowever I agree it's a good idea to also apply `quote` to the revision as well as in 2. !",
"Should be fixed by https://github.com/huggingface/datasets/issues/5099 - we'll do a release later today"
] | 2022-10-11T10:05:32
| 2022-10-13T13:14:20
| 2022-10-13T13:14:20
|
NONE
| null | null | null | null |
## Describe the bug
Dataset files with a `#` symbol in their paths aren't read correctly.
## Steps to reproduce the bug
The data in the folder `c#` of this [dataset](https://huggingface.co/datasets/loubnabnl/bigcode_csharp) can't be loaded, while the folder `c_sharp` with the same data is loaded properly.
```python
ds = load_dataset('loubnabnl/bigcode_csharp', split="train", data_files=["data/c#/*"])
```
```
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/27a3166cff4bb18e11919cafa6f169c0f57483de/data/c#/data_0003.jsonl
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
cc @lhoestq
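A minimal sketch of the root cause (following the analysis in the comments below): the `#` is never percent-encoded, so everything after it is treated as a URL fragment. `urllib.parse.quote` shows what the resolved URL should look like; the repo and file names are the ones from the example above.
```python
from urllib.parse import quote

base = "https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/main/"
path = "data/c#/data_0003.jsonl"

print(base + path)         # .../data/c#...               -> "#/data_0003.jsonl" is dropped as a fragment
print(base + quote(path))  # .../data/c%23/data_0003.jsonl -> resolves correctly
```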
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5099/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5099/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 3:08:48
|
https://api.github.com/repos/huggingface/datasets/issues/5098
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5098/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5098/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5098/events
|
https://github.com/huggingface/datasets/issues/5098
| 1,404,058,518
|
I_kwDODunzps5TsDuW
| 5,098
|
Classes label error when loading symbolic links using imagefolder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49552732?v=4",
"events_url": "https://api.github.com/users/horizon86/events{/privacy}",
"followers_url": "https://api.github.com/users/horizon86/followers",
"following_url": "https://api.github.com/users/horizon86/following{/other_user}",
"gists_url": "https://api.github.com/users/horizon86/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/horizon86",
"id": 49552732,
"login": "horizon86",
"node_id": "MDQ6VXNlcjQ5NTUyNzMy",
"organizations_url": "https://api.github.com/users/horizon86/orgs",
"received_events_url": "https://api.github.com/users/horizon86/received_events",
"repos_url": "https://api.github.com/users/horizon86/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/horizon86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/horizon86/subscriptions",
"type": "User",
"url": "https://api.github.com/users/horizon86",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
] |
[
"It can be solved temporarily by remove `resolve` in \r\nhttps://github.com/huggingface/datasets/blob/bef23be3d9543b1ca2da87ab2f05070201044ddc/src/datasets/data_files.py#L278",
"Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `Path(os.path.abspath(filepath))`.",
"> Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `Path(os.path.abspath(filepath))`.\r\n\r\nThanks for your reply!"
] | 2022-10-11T06:10:58
| 2022-11-14T14:40:20
| 2022-11-14T14:40:20
|
NONE
| null | null | null | null |
**Is your feature request related to a problem? Please describe.**
Like this: #4015
When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** is used as the class name instead of the parent folder of the symbolic link itself. Could you add an option to decide whether symbolic links are followed?
This is inconsistent with the `torchvision.datasets.ImageFolder` behavior.
For example:


It uses `others` (in the green circle) as the class label instead of `abnormal`; I wish `load_dataset` would not use the real file's parent folder as the label.
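A minimal illustration of the behavior difference discussed in the comments, using hypothetical paths: `Path(...).resolve()` follows the symlink, so the label comes from the real file's parent folder, while `os.path.abspath` keeps the symlink's own parent folder.
```python
import os
from pathlib import Path

# Hypothetical layout: data/abnormal/img_001.png is a symlink
# pointing at the real file data/others/img_001.png
link = "data/abnormal/img_001.png"

print(Path(link).resolve().parent.name)          # "others"   -> current labeling
print(Path(os.path.abspath(link)).parent.name)   # "abnormal" -> expected labeling
```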
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5098/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5098/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 34 days, 8:29:22
|
https://api.github.com/repos/huggingface/datasets/issues/5097
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5097/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5097/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5097/events
|
https://github.com/huggingface/datasets/issues/5097
| 1,403,679,353
|
I_kwDODunzps5TqnJ5
| 5,097
|
Fatal error with pyarrow/libarrow.so
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11340846?v=4",
"events_url": "https://api.github.com/users/catalys1/events{/privacy}",
"followers_url": "https://api.github.com/users/catalys1/followers",
"following_url": "https://api.github.com/users/catalys1/following{/other_user}",
"gists_url": "https://api.github.com/users/catalys1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/catalys1",
"id": 11340846,
"login": "catalys1",
"node_id": "MDQ6VXNlcjExMzQwODQ2",
"organizations_url": "https://api.github.com/users/catalys1/orgs",
"received_events_url": "https://api.github.com/users/catalys1/received_events",
"repos_url": "https://api.github.com/users/catalys1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/catalys1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/catalys1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/catalys1",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting, @catalys1.\r\n\r\nThis seems a duplicate of:\r\n- #3310 \r\n\r\nThe source of the problem is in PyArrow:\r\n- [ARROW-15141: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-15141)\r\n- [ARROW-17501: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-17501)\r\n\r\nThe bug in their dependency is still unresolved:\r\n- https://github.com/aws/aws-sdk-cpp/issues/1809\r\n\r\nApparently, the `aws-sdk-cpp` PyArrow dependency needs to be pinned at version `1.8.186` if using conda. Have you updated it after installing PyArrow?\r\n```shell\r\nconda list aws-sdk-cpp\r\n```\r\n\r\nMaybe you should try to downgrade it to that version:\r\n```shell\r\nconda install -c conda-forge aws-sdk-cpp=1.8.186\r\n```"
] | 2022-10-10T20:29:04
| 2022-10-11T06:56:01
| 2022-10-11T06:56:00
|
NONE
| null | null | null | null |
## Describe the bug
When using datasets, at the very end of my jobs the program crashes (see trace below).
It doesn't seem to affect anything, as it appears to happen as the program is closing down. Just importing `datasets` is enough to cause the error.
## Steps to reproduce the bug
This is sufficient to reproduce the problem:
```bash
python -c "import datasets"
```
## Expected results
Program should run to completion without an error.
## Actual results
```bash
Fatal error condition occurred in /opt/vcpkg/buildtrees/aws-c-io/src/9e6648842a-364b708815.clean/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS
Exiting Application
################################################################################
Stack trace:
################################################################################
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200af06) [0x150dff547f06]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x20028e5) [0x150dff53f8e5]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f27e09) [0x150dff464e09]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f25948) [0x150dff462948]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1ee0b46) [0x150dff41db46]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x194546a) [0x150dfee8246a]
/lib64/libc.so.6(+0x39b0c) [0x150e15eadb0c]
/lib64/libc.so.6(on_exit+0) [0x150e15eadc40]
/u/user/miniconda3/envs/env/bin/python(+0x28db18) [0x560ae370eb18]
/u/user/miniconda3/envs/env/bin/python(+0x28db4b) [0x560ae370eb4b]
/u/user/miniconda3/envs/env/bin/python(+0x28db90) [0x560ae370eb90]
/u/user/miniconda3/envs/env/bin/python(_PyRun_SimpleFileObject+0x1e6) [0x560ae37123e6]
/u/user/miniconda3/envs/env/bin/python(_PyRun_AnyFileObject+0x44) [0x560ae37124c4]
/u/user/miniconda3/envs/env/bin/python(Py_RunMain+0x35d) [0x560ae37135bd]
/u/user/miniconda3/envs/env/bin/python(Py_BytesMain+0x39) [0x560ae37137d9]
/lib64/libc.so.6(__libc_start_main+0xf3) [0x150e15e97493]
/u/user/miniconda3/envs/env/bin/python(+0x2125d4) [0x560ae36935d4]
Aborted (core dumped)
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5097/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5097/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10:26:56
|
https://api.github.com/repos/huggingface/datasets/issues/5096
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5096/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5096/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5096/events
|
https://github.com/huggingface/datasets/issues/5096
| 1,403,379,816
|
I_kwDODunzps5TpeBo
| 5,096
|
Transfer some canonical datasets under an organization namespace
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"The transfer of the dummy dataset to the dummy org works as expected:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"dummy_canonical_dataset\", download_mode=\"force_redownload\"); ds\r\nDownloading builder script: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2.98k/2.98k [00:00<00:00, 2.01MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default (download: 411 bytes, generated: 385 bytes, post-processed: Unknown size, total: 796 bytes) to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDownloading data: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 411/411 [00:00<00:00, 293kB/s]\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 304.16it/s]\r\nOut[1]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"dummy-canonical-org/dummy_canonical_dataset\"); ds\r\nDownloading builder script: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2.98k/2.98k [00:00<00:00, 1.57MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 362.48it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n```",
"Cool ! π ",
"Maybe we should be a bit more proactive with these transfers. There are only β70 canonical models, so reaching that number with datasets would be great, too. It's not easy considering the current number of β750 canonical datasets, but doable.\r\n\r\nFor instance, it shouldn't be too hard to transfer these datasets (partial list; all of them have more than > 1k downloads):\r\n\r\n<details>\r\n\r\n<summary> Datasets to transfer </summary>\r\n\r\n```\r\nquickdraw -> google\r\nopenai_humaneval -> openai\r\nc4 -> allenai/c4 (the canonical version reads data from the org version)\r\nmbpp -> google (ask jaaustin (author) where to transfer the dataset)\r\ncompetition_math -> hendrycks (author)\r\ngsm8k -> openai\r\nai2_arc -> allenai\r\nimdb -> stanfordai\r\ngreek_legal_code -> chrispap (author)\r\nspider -> Yale-LILY\r\nsquad and squad_v2 -> rajpurkarlab (or rajpurkar, a member of the org and one of the authors)\r\ncppe-5 -> rishitdagli\r\nnews_commentary -> Helsinki-NLP\r\njfleg -> keisks (author)\r\npubmed_qa -> qiaojin (author)\r\nmedmcqa -> infinitylogesh (author)\r\ncifar10 and cifar100 -> UniversityofToronto\r\ncc100 -> gwenzek (author)\r\nasset -> facebook\r\nblbooks -> BritishLibraryLabs\r\ncapes -> FLSRDS (maybe the author?)\r\ncc_news -> fhamborg (author)\r\nclue -> CLUE benchmark\r\ncoqa -> stanfordnlp\r\nlambada -> germank (author)\r\nlibrispeech_asr -> openslr\r\ndrop -> allenai\r\nduorc -> salesforce (ask amritasaha87 (author) where to transfer)\r\nglue -> nyu-mll ?\r\ngo_emotions -> google\r\ncommonsense_qa -> tau\r\ndbpedia_14 -> JensLehmann (author?)\r\ndiscofuse -> google\r\nmc4 -> allenai/c4\r\nopenbookqa -> allenai\r\nropes -> allene\r\ntrivia_qa -> mandarjoshi (author)\r\nwikiann -> afshinrahimi (author)\r\nxtreme -> google\r\nxscr -> INK-USC\r\nyelp_review_full -> Yelp\r\ntruthful_qa -> jacobhilton22 (author)\r\nbigbench -> google\r\nxnli -> facebook\r\nsciq -> allenai\r\nsst2 -> stanfordnlp\r\nblimp -> alexwarstadt (author)\r\ntweet_eval -> cardiffnlp\r\nbeans -> AI-Lab-Makerere\r\nlex_glue -> coastalcph\r\namericas_nli -> abteen (author)\r\nopus_euconst -> tiedeman (author)\r\nmedical_questions_pairs -> curaihealth\r\nweb_questions -> joberant (author)\r\nanli -> facebook\r\nrace -> CarnegieMellonCS\r\nklue -> klue\r\nwino_bias -> uclanlp\r\nwiki_qa -> microsoft\r\nxcopa -> cambridgeltl\r\nindic_glue -> ai4bharat\r\nboolq -> google\r\nadversarial_qa -> mbartolo (author)\r\nnq_open -> google\r\nsnli -> stanfordnlp\r\nstsb_multi_mt -> PhilipMay (author)\r\nmulti_nli -> sleepinyourhat (author)\r\npaws -> google\r\npaws-x -> google\r\nms_marco - microsoft\r\nxquad -> deepmind\r\nnarrativeqa -> deepmind\r\nkilt_tasks -> facebook\r\nhate_speech_offensive -> tdavidson (author)\r\nwiki40b -> google\r\ncovost2 -> facebook\r\ncommon_gen -> INKLAB\r\nmulti_eurlex -> kiddothe2b (author)\r\nexams -> mhardalov (author)\r\ntiny_shakespeare -> karpathy (author)\r\nblbooksgenre -> BritishLibraryLabs ?\r\nfood101 -> ethz ?\r\nscitail -> allenai\r\nbillsum -> FiscalNote\r\nimppres -> facebook\r\nquartz -> allenai\r\nqasc -> allenai\r\nquail -> textmachinelab\r\nwiki_lingua -> esdurmus\r\ncos_e -> salesforce ?\r\ncivil_comments -> google ? 
(create a βjigsawβ org) \r\nxquad_r -> google\r\nwikitext-> metamind (or salesforce)\r\n\r\n// deprecate c4 and mc4 in favor of allenai/c4 (add a dataset script to the org version to make it easier to use?)\r\n```\r\n</details>\r\n\r\nAlso, a space that allows users to claim the existing canonical datasets (for themselves or their organizations) could be nice.\r\n\r\nWDYT?",
"Next week I can take care of some of them :) In most cases we just need to send an email to ask them if they're ok with it.\r\nLet's coordinate on slack ?",
"Yup, sounds good to me!",
"I can also continuing working on this if we agree this has become a priority now.",
"cool stuff! \r\n\r\nthis morning on my side i moved huggingface.co/ctrl (a not very used model) to its rightful entity",
"As a previous step before transferring the datasets, we decided we should convert them to Parquet, so that the viewer does not stop working (the viewer does not support datasets with scripts). \r\n\r\nDatasets converted to Parquet:\r\n- [x] adversarial_qa\r\n- [x] ai2_arc\r\n- [x] americas_nli\r\n- [x] anli\r\n- [x] asset\r\n- [x] beans\r\n- [ ] bigbench\r\n- [x] billsum\r\n- [ ] blbooks: it was already transferred to: TheBritishLibrary/blbooks\r\n- [ ] blbooksgenre: it was already transferred to: TheBritishLibrary/blbooksgenre\r\n- [x] blimp\r\n- [x] boolq\r\n- [ ] c4\r\n- [x] capes\r\n- [ ] cc100\r\n- [x] cc_news\r\n- [x] cifar10\r\n- [x] cifar100\r\n- [x] civil_comments\r\n- [x] clue\r\n- [x] common_gen\r\n- [x] commonsense_qa\r\n- [ ] competition_math: it was already transferred to: hendrycks/competition_math\r\n- [x] coqa\r\n- [x] cos_e\r\n- [ ] covost2: it requires manual download\r\n- [x] cppe-5\r\n- [x] dbpedia_14\r\n- [x] discofuse\r\n- [x] drop\r\n- [x] duorc\r\n- [x] exams\r\n- [x] food101\r\n- [x] glue\r\n- [x] go_emotions\r\n- [x] greek_legal_code\r\n- [x] gsm8k\r\n- [x] hate_speech_offensive\r\n- [x] imdb\r\n- [x] imppres\r\n- [x] indic_glue\r\n- [x] jfleg\r\n- [x] kilt_tasks\r\n- [x] klue\r\n- [x] lambada\r\n- [x] lex_glue\r\n- [ ] librispeech_asr\r\n- [x] mbpp\r\n- [ ] mc4\r\n- [x] medical_questions_pairs\r\n- [x] medmcqa\r\n- [x] ms_marco\r\n- [ ] multi_eurlex\r\n- [x] multi_nli\r\n- [ ] narrativeqa\r\n- [ ] news_commentary\r\n- [x] nq_open\r\n- [x] openai_humaneval\r\n- [x] openbookqa\r\n- [ ] opus_euconst\r\n- [x] paws\r\n- [x] paws-x\r\n- [x] pubmed_qa\r\n- [x] qasc\r\n- [x] quail\r\n- [x] quartz\r\n- [ ] quickdraw\r\n- [x] race\r\n- [x] ropes\r\n- [x] sciq\r\n- [x] scitail\r\n- [ ] snli\r\n- [x] spider\r\n- [x] squad\r\n- [x] squad_v2\r\n- [x] sst2\r\n- [x] stsb_multi_mt\r\n- [x] tiny_shakespeare\r\n- [x] trivia_qa\r\n- [x] truthful_qa\r\n- [x] tweet_eval\r\n- [x] web_questions\r\n- [ ] wiki40b\r\n- [x] wiki_lingua\r\n- [x] wiki_qa\r\n- [ ] wikiann\r\n- [x] wikitext\r\n- [x] wino_bias\r\n- [x] xcopa\r\n- [x] xcsr\r\n- [x] xnli\r\n- [x] xquad\r\n- [x] xquad_r\r\n- [ ] xtreme\r\n- [x] yelp_review_full\r\n",
"For `c4` and `mc4` I was thinking of adding the corresponding configs to `allenai/c4` and redirect `c4` and `mc4` to `allenai/c4`. I'll open a PR on `allenai/c4` if it's good for you",
"@davanstrien and @lhoestq, I have shared with you this spreadsheet: https://docs.google.com/spreadsheets/d/1GvNTd1UxmtTvEFOK-Eq6E3Str4FUWQuWZsEN0WVFirs/edit?usp=sharing\r\n\r\nThis way we can take datasets by batches to contact the authors and transfer to the organizations.",
"We have already transferred all canonical datasets under organization/user namespaces."
] | 2022-10-10T15:44:31
| 2024-06-24T06:06:28
| 2024-06-24T06:02:45
|
MEMBER
| null | null | null | null |
As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist).
Conversely, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and will eventually delete it).
First, we should test it using a dummy dataset/organization.
TODO:
- [x] Test with a dummy dataset
- [x] Create dummy canonical dataset: https://huggingface.co/datasets/dummy_canonical_dataset
- [x] Create dummy organization: https://huggingface.co/dummy-canonical-org
- [x] Transfer dummy canonical dataset to dummy organization
- [ ] Transfer datasets
- [x] babi_qa => facebook
- [x] blbooks => TheBritishLibrary/blbooks
- [x] blbooksgenre => TheBritishLibrary/blbooksgenre
- [x] common_gen => allenai
- [x] commonsense_qa => tau
- [x] competition_math => hendrycks/competition_math
- [x] cord19 => allenai
- [x] emotion => dair-ai
- [ ] gem => GEM
- [x] hellaswag => Rowan
- [x] hendrycks_test => cais/mmlu
- [x] indonlu => indonlp
- [ ] multilingual_librispeech => facebook
- It already exists "facebook/multilingual_librispeech"
- [ ] oscar => oscar-corpus
- [x] peer_read => allenai
- [x] qasper => allenai
- [x] reddit => webis/tldr-17
- [x] russian_super_glue => russiannlp
- [x] rvl_cdip => aharley
- [x] s2orc => allenai
- [x] scicite => allenai
- [x] scifact => allenai
- [x] scitldr => allenai
- [x] swiss_judgment_prediction => rcds
- [x] the_pile => EleutherAI
- [ ] wmt14, wmt15, wmt16, wmt17, wmt18, wmt19,... => wmt
- [ ] Deprecate (and eventually remove) datasets that cannot be transferred because they already exist
- [x] banking77 => PolyAI
- [x] common_voice => mozilla-foundation
- [x] german_legal_entity_recognition => elenanereiss
- ...
EDIT: the list above is continuously being updated
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5096/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5096/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 622 days, 14:18:14
|
https://api.github.com/repos/huggingface/datasets/issues/5094
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5094/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5094/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5094/events
|
https://github.com/huggingface/datasets/issues/5094
| 1,403,214,950
|
I_kwDODunzps5To1xm
| 5,094
|
Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36822895?v=4",
"events_url": "https://api.github.com/users/RR-28023/events{/privacy}",
"followers_url": "https://api.github.com/users/RR-28023/followers",
"following_url": "https://api.github.com/users/RR-28023/following{/other_user}",
"gists_url": "https://api.github.com/users/RR-28023/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RR-28023",
"id": 36822895,
"login": "RR-28023",
"node_id": "MDQ6VXNlcjM2ODIyODk1",
"organizations_url": "https://api.github.com/users/RR-28023/orgs",
"received_events_url": "https://api.github.com/users/RR-28023/received_events",
"repos_url": "https://api.github.com/users/RR-28023/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RR-28023/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RR-28023/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RR-28023",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi ! Could it be an Out of Memory issue that could have killed one of the processes ? can you check your memory ?",
"Hi! I don't think it is a memory issue. I'm monitoring the main and spawn python processes and threads with `htop` and the memory does not peak. Besides, the example I've posted above should not be that demanding in terms of memory, right? (I have 32GB of RAM). ",
"Indeed it should be fine. I couldn't reproduce the error though - I ran your script on my side and it works fine. What version of pytorch are you using ?",
"Interesting.. I'm using `torch 1.12.1`",
"I also tried on colab and it works fine π€ \r\nMaybe something is wrong with your installation of pytorch ?",
"Oh actually I just saw that you're using python 3.9\r\n\r\nThis could be related to https://github.com/huggingface/datasets/issues/4113\r\n\r\nWe'll fix that as soon as we can, in the meantime you can try to use use single process, or use an older version of python maybe ?",
"I tried with python 3.7 and the issue persists. In collab, which also uses 3.7 I don't get the issue, so yes I guess is something on mu side... will post it here if I manage to fix it",
"Hi! Which version of transformers are you using? I test the code on Colab (so python 3.7) with transformers 4.23.1, torch 1.12.1 and pyarrow 9.0.0 (also 6.x), it worked without stuck.",
"Hi, I have the same problem in use **datasets.IterableDatasetDict.map()**\r\nmy pytorch is 2.0.0a0+gitc263bd4\r\nmy python is 3.8.16(default, Jun 12 2023, 17:37:21)\r\nwork on aarch64 in 16 node, each node with 4*nVidia-A100-40G\r\nevery node have 4 process execute code as β\r\n\r\n```\r\nfrom datasets import load_dataset, interleave_datasets, IterableDatasetDict, concatenate_datasets\r\n```\r\n...\r\n```\r\n model_args.cache_dir = '/home/scx/.cache'\r\n for dataset_name in data_args.datasets_name:\r\n train_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='train'\r\n ).select_columns('text')\r\n )\r\n valid_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='validation'\r\n ).select_columns('text')\r\n )\r\n train_dataset = interleave_datasets(train_datasets,\r\n probabilities=data_args.datasets_probabilities, \r\n seed=training_args.seed,\r\n stopping_strategy='all_exhausted')\r\n raw_datasets = IterableDatasetDict({'train': train_dataset, 'validation': valid_dataset})\r\n```\r\n...\r\n\r\n```\r\n tokenized_datasets = None\r\n with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n if not data_args.streaming:\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n remove_columns=column_names,\r\n )\r\n else:\r\n #TODO 20230722\r\n logger.info('{}: {}'.format(__file__, 'tokenized_datasets = raw_datasets.map('))\r\n logger.info('len raw_datasets: {}'.format(len(raw_datasets.items())))\r\n logger.info('raw_datasets:{}'.format(raw_datasets.items()))\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n batch_size=1000,\r\n remove_columns=column_names\r\n )\r\n logger.info('map ok!')\r\n logger.info('show train: {}'.format(next(iter(tokenized_datasets['train']))))\r\n logger.info('ok')\r\n # ### RAW CODE ###\r\n # tokenized_datasets = raw_datasets.map(\r\n # tokenize_function,\r\n # batched=True,\r\n # batch_size=1000,\r\n # remove_columns=column_names\r\n # )\r\n #TODO 20230722\r\n logger.info(\"Finish tokenization\")\r\n```\r\nthe output of my code is\r\n```\r\n07/22/2023 21:57:09 - INFO - __main__ - /demo/run_blue_space.py: tokenized_datasets = raw_datasets.map(\r\n07/22/2023 21:57:09 - INFO - __main__ - len raw_datasets: 2\r\n07/22/2023 21:57:09 - INFO - __main__ - raw_datasets:dict_items([('train', <datasets.iterable_dataset.IterableDataset object at 0x4005ee301190>), ('validation', <datasets.iterable_dataset.IterableDataset object at 0x4005ee5427f0>)])\r\n07/22/2023 21:57:09 - INFO - __main__ - map ok!\r\n07/22/2023 22:01:07 - INFO - __main__ - show train: {'input_ids': [14608, 26797, 31891, 34260, 12227, 33207, 5, 5, 31632, 26797, 31891, 34260, 12227, 33207, 7398, 28561, 31236, 31177, 31253, 33558, 31556, 31377, 72, 20732, 32383, 32295, 14027, 31178, 53, 61, 53, 55, 31189, 31146, 31321, 31235, 53, 61, 56, 58, 31189, 31145, 72, 53, 61, 58, 54, 31189, 54, 31245, 53, 60, 31224, 31896, 31178, 28561, 29331, 20732, 31888, 32637, 4426, 2824, 72, 53, 61, 60, 55, 31189, 53, 54, 31245, 53, 31224, 31896, 31178, 28561, 29331, 26137, 20732, 4426, 2824, 73, 54, 52, 52, 52, 
31189, 61, 31245, 59, 31224, 31896, 31178, 29331, 28561, 20732, 4426, 2824, 73, 5], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n07/22/2023 22:01:07 - INFO - __main__ - ok\r\n```\r\n\r\n",
"@bio-punk `IterableDatasetDict.map` does not support multiprocessing (only `DatasetDict.map` and `Dataset.map` do), so please open a new issue as this doesn't seem to be related to the original issue. ",
"Closing as this issue doesn't seem to be related to `datasets`."
] | 2022-10-10T13:50:56
| 2023-07-24T15:29:13
| 2023-07-24T15:29:13
|
NONE
| null | null | null | null |
## Describe the bug
There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [at this step](https://github.com/huggingface/datasets/blob/1b935dab9d2f171a8c6294269421fe967eb55e34/src/datasets/arrow_dataset.py#L2663) go into wait mode forever.
## Steps to reproduce the bug
The below code goes into deadlock when `NUMBER_OF_PROCESSES` is greater than one.
```python
NUMBER_OF_PROCESSES = 2
from transformers import AutoTokenizer, AutoModel
from datasets import load_dataset
dataset = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model.to("cpu")
def cls_pooling(model_output):
return model_output.last_hidden_state[:, 0]
def generate_embeddings_batched(examples):
sentences_batch = list(examples['sentence1'])
encoded_input = tokenizer(
sentences_batch, padding=True, truncation=True, return_tensors="pt"
)
encoded_input = {k: v.to("cpu") for k, v in encoded_input.items()}
model_output = model(**encoded_input)
embeddings = cls_pooling(model_output)
examples['embeddings'] = embeddings.detach().cpu().numpy() # 64, 384
return examples
embeddings_dataset = dataset.map(
generate_embeddings_batched,
batched=True,
batch_size=10,
num_proc=NUMBER_OF_PROCESSES
)
```
While debugging it I've seen that it gets "stuck" when calling `torch.nn.Embedding.forward` but some testing shows that the same happens with other functions from `torch.nn`.
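
For reference, a single-process fallback along the lines suggested in the comments would look like this; the `torch.set_num_threads(1)` call is an additional, unverified guess to reduce thread contention and is not confirmed in this thread:

```python
import torch

# Workaround sketch: keep the map call in the main process.
torch.set_num_threads(1)  # assumption: limit intra-op threads; not verified as necessary

embeddings_dataset = dataset.map(
    generate_embeddings_batched,
    batched=True,
    batch_size=10,  # num_proc left at its default, so no subprocesses are spawned
)
```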
## Environment info
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.31
- Python version: 3.9.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
Not sure if this is an HF problem, a PyTorch problem, or something I'm doing wrong.
Thanks!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5094/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5094/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 287 days, 1:38:17
|
https://api.github.com/repos/huggingface/datasets/issues/5093
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5093/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5093/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5093/events
|
https://github.com/huggingface/datasets/issues/5093
| 1,402,939,660
|
I_kwDODunzps5TnykM
| 5,093
|
Mismatch between tutorial and doc
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
] |
[
"Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).",
"Can I work on this?",
"Fixed in https://github.com/huggingface/datasets/pull/5095"
] | 2022-10-10T10:23:53
| 2022-10-10T17:51:15
| 2022-10-10T17:51:14
|
MEMBER
| null | null | null | null |
## Describe the bug
In the "Process text data" tutorial, [`map` is shown with `return_tensors` as a kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor does it work.
## Steps to reproduce the bug
MWE:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt")
```
## Expected results
return_tensors to be a valid kwarg :smiley:
## Actual results
```python
>> TypeError: map() got an unexpected keyword argument 'return_tensors'
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5093/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5093/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7:27:21
|
https://api.github.com/repos/huggingface/datasets/issues/5090
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5090/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5090/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5090/events
|
https://github.com/huggingface/datasets/issues/5090
| 1,401,102,407
|
I_kwDODunzps5TgyBH
| 5,090
|
Review sync issues from GitHub to Hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Nice!!"
] | 2022-10-07T12:31:56
| 2022-10-08T07:07:36
| 2022-10-08T07:07:36
|
MEMBER
| null | null | null | null |
## Describe the bug
We have discovered that there were sometimes sync issues between GitHub and Hub datasets after a merge commit to the main branch.
For example:
- this merge commit: https://github.com/huggingface/datasets/commit/d74a9e8e4bfff1fed03a4cab99180a841d7caf4b
- was not properly synced with the Hub: https://github.com/huggingface/datasets/actions/runs/3002495269/jobs/4819769684
```
[main 9e641de] Add Papers with Code ID to scifact dataset (#4941)
Author: Albert Villanova del Moral <albertvillanova@users.noreply.huggingface.co>
1 file changed, 42 insertions(+), 14 deletions(-)
push failed !
GitCommandError(['git', 'push'], 1, b'remote: ---------------------------------------------------------- \nremote: Sorry, your push was rejected during YAML metadata verification: \nremote: - Error: "license" does not match any of the allowed types \nremote: ---------------------------------------------------------- \nremote: Please find the documentation at: \nremote: https://huggingface.co/docs/hub/models-cards#model-card-metadata \nremote: ---------------------------------------------------------- \nTo [https://huggingface.co/datasets/scifact.git\n](https://huggingface.co/datasets/scifact.git/n) ! [remote rejected] main -> main (pre-receive hook declined)\nerror: failed to push some refs to \'[https://huggingface.co/datasets/scifact.git\](https://huggingface.co/datasets/scifact.git/)'', b'')
```
We are reviewing sync issues in previous commits to recover them and repushing to the Hub.
TODO: Review
- [x] #4941
- scifact
- [x] #4931
- scifact
- [x] #4753
- wikipedia
- [x] #4554
- wmt17, wmt19, wmt_t2t
- Fixed with "Release 2.4.0" commit: https://github.com/huggingface/datasets/commit/401d4c4f9b9594cb6527c599c0e7a72ce1a0ea49
- https://huggingface.co/datasets/wmt17/commit/5c0afa83fbbd3508ff7627c07f1b27756d1379ea
- https://huggingface.co/datasets/wmt19/commit/b8ad5bf1960208a376a0ab20bc8eac9638f7b400
- https://huggingface.co/datasets/wmt_t2t/commit/b6d67191804dd0933476fede36754a436b48d1fc
- [x] #4607
- [x] #4416
- lccc
- Fixed with "Release 2.3.0" commit: https://huggingface.co/datasets/lccc/commit/8b1f8cf425b5653a0a4357a53205aac82ce038d1
- [x] #4367
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5090/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5090/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 18:35:40
|
https://api.github.com/repos/huggingface/datasets/issues/5089
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5089/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5089/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5089/events
|
https://github.com/huggingface/datasets/issues/5089
| 1,400,788,486
|
I_kwDODunzps5TflYG
| 5,089
|
Resume failed process
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4",
"events_url": "https://api.github.com/users/felix-schneider/events{/privacy}",
"followers_url": "https://api.github.com/users/felix-schneider/followers",
"following_url": "https://api.github.com/users/felix-schneider/following{/other_user}",
"gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/felix-schneider",
"id": 208336,
"login": "felix-schneider",
"node_id": "MDQ6VXNlcjIwODMzNg==",
"organizations_url": "https://api.github.com/users/felix-schneider/orgs",
"received_events_url": "https://api.github.com/users/felix-schneider/received_events",
"repos_url": "https://api.github.com/users/felix-schneider/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions",
"type": "User",
"url": "https://api.github.com/users/felix-schneider",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[] | 2022-10-07T08:07:03
| 2022-10-07T08:07:03
| null |
NONE
| null | null | null | null |
**Is your feature request related to a problem? Please describe.**
When a process (`map`, `filter`, etc.) crashes part-way through, you lose all progress.
**Describe the solution you'd like**
It would be good if the cache reflected the partial progress, so that after we restart the script, the process can restart where it left off.
**Describe alternatives you've considered**
Doing processing outside of `datasets`, by writing the dataset to json files and building a restart mechanism myself.
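
For illustration, here is a minimal sketch of such a restart mechanism built on shard-level checkpoints; `expensive_function`, the shard count and the output directory are placeholders, and a crash in the middle of `save_to_disk` would still require removing the partial shard directory by hand:

```python
import os
from datasets import load_dataset, load_from_disk, concatenate_datasets

NUM_SHARDS = 8            # arbitrary
OUT_DIR = "checkpoints"   # arbitrary

dataset = load_dataset("glue", "sst2", split="train")

shards = []
for i in range(NUM_SHARDS):
    shard_path = os.path.join(OUT_DIR, f"shard_{i}")
    if os.path.exists(shard_path):
        # This shard was already processed before a previous crash: reuse it.
        shards.append(load_from_disk(shard_path))
        continue
    shard = dataset.shard(num_shards=NUM_SHARDS, index=i)
    shard = shard.map(expensive_function, batched=True)  # placeholder processing function
    shard.save_to_disk(shard_path)
    shards.append(shard)

processed = concatenate_datasets(shards)
```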
**Additional context**
N/A
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5089/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5089/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5088
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5088/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5088/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5088/events
|
https://github.com/huggingface/datasets/issues/5088
| 1,400,530,412
|
I_kwDODunzps5TemXs
| 5,088
|
load_dataset("json", ...) doesn't read local .json.gz properly
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4",
"events_url": "https://api.github.com/users/junwang-wish/events{/privacy}",
"followers_url": "https://api.github.com/users/junwang-wish/followers",
"following_url": "https://api.github.com/users/junwang-wish/following{/other_user}",
"gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/junwang-wish",
"id": 112650299,
"login": "junwang-wish",
"node_id": "U_kgDOBrboOw",
"organizations_url": "https://api.github.com/users/junwang-wish/orgs",
"received_events_url": "https://api.github.com/users/junwang-wish/received_events",
"repos_url": "https://api.github.com/users/junwang-wish/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions",
"type": "User",
"url": "https://api.github.com/users/junwang-wish",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] |
[
"Hi @junwang-wish, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce the bug. Which version of `datasets` are you using? Does the problem persist if you update `datasets`?\r\n```shell\r\npip install -U datasets\r\n``` ",
"Thanks @albertvillanova I updated `datasets` from `2.5.1` to `2.5.2` and tested copying the `json.gz` to a different directory and my mind was blown:\r\n\r\n```python\r\nfpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nproduces \r\n```python\r\nUsing custom data configuration default-0e6cf24134163e8b\r\nFound cached dataset json (/data/junwang/.cache/huggingface/datasets/json/default-0e6cf24134163e8b/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab)\r\n(1, 0)\r\n```\r\nbut then I ran below command to see if the same file in a different directory leads to same discrepancy\r\n```shell\r\ncp /data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz tmp_test.json.gz\r\n```\r\nand so I ran\r\n```python\r\nfpath = 'tmp_test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nand behold, I get \r\n```python\r\nUsing custom data configuration default-f679b32ab0008520\r\nDownloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n(1, 1)\r\n```\r\nThey match now !\r\n\r\nThis problem happens regardless of the shell I use (VScode jupyter extension or plain old Python REPL). \r\n\r\nI attached the `json.gz` here for reference: [test.json.gz](https://github.com/huggingface/datasets/files/9734843/test.json.gz)\r\n\r\n"
] | 2022-10-07T02:16:58
| 2022-10-07T14:43:16
| null |
NONE
| null | null | null | null |
## Describe the bug
I have a local file `*.json.gz`; it can be read by `pandas.read_json(lines=True)`, but it cannot be read by `load_dataset("json")` (the result has 0 lines).
## Steps to reproduce the bug
```python
fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'
ds_panda = DatasetDict(
test=Dataset.from_pandas(
pd.read_json(fpath, lines=True)
)
)
ds_direct = load_dataset(
'json', data_files={
'test': fpath
}, features=Features(
text_input=Value(dtype="string", id=None),
text_output=Value(dtype="string", id=None)
)
)
len(ds_panda['test']), len(ds_direct['test'])
```
## Expected results
Lines of `ds_panda['test']` and `ds_direct['test']` should match.
## Actual results
```
Using custom data configuration default-c0ef2598760968aa
Downloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...
Dataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.
(62087, 0)
```
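
As a side note, since the discussion above traces the mismatch to a stale cache entry, one unverified way to double-check is to force the JSON builder to re-prepare the file (using the same `fpath` as above):

```python
from datasets import load_dataset

ds_direct = load_dataset(
    "json",
    data_files={"test": fpath},
    download_mode="force_redownload",  # ignore any previously cached preparation of this file
)
```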
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.8.13
- PyArrow version: 9.0.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5088/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5088/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5086
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5086/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5086/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5086/events
|
https://github.com/huggingface/datasets/issues/5086
| 1,400,216,975
|
I_kwDODunzps5TdZ2P
| 5,086
|
HTTPError: 404 Client Error: Not Found for url
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4",
"events_url": "https://api.github.com/users/keyuchen21/events{/privacy}",
"followers_url": "https://api.github.com/users/keyuchen21/followers",
"following_url": "https://api.github.com/users/keyuchen21/following{/other_user}",
"gists_url": "https://api.github.com/users/keyuchen21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keyuchen21",
"id": 54015474,
"login": "keyuchen21",
"node_id": "MDQ6VXNlcjU0MDE1NDc0",
"organizations_url": "https://api.github.com/users/keyuchen21/orgs",
"received_events_url": "https://api.github.com/users/keyuchen21/received_events",
"repos_url": "https://api.github.com/users/keyuchen21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keyuchen21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keyuchen21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keyuchen21",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"FYI @lewtun ",
"Hi @km5ar, thanks for reporting.\r\n\r\nThis should be fixed in the notebook:\r\n- the filename `datasets-issues-with-hf-doc-builder.jsonl` no longer exists on the repo; instead, current filename is `datasets-issues-with-comments.jsonl`\r\n- see: https://huggingface.co/datasets/lewtun/github-issues/tree/main\r\n\r\nAnyway, depending on your version of `datasets`, you can now use:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"lewtun/github-issues\")\r\nissues_dataset\r\n```\r\ninstead of:\r\n```python\r\nfrom huggingface_hub import hf_hub_url\r\n\r\ndata_files = hf_hub_url(\r\n repo_id=\"lewtun/github-issues\",\r\n filename=\"datasets-issues-with-hf-doc-builder.jsonl\",\r\n repo_type=\"dataset\",\r\n)\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\nissues_dataset\r\n```\r\n\r\nOutput:\r\n```python\r\nIn [25]: ds = load_dataset(\"lewtun/github-issues\")\r\nDownloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 10.5k/10.5k [00:00<00:00, 5.75MB/s]\r\nUsing custom data configuration lewtun--github-issues-cff5093ecc410ea2\r\nDownloading and preparing dataset json/lewtun--github-issues to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDownloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 12.2M/12.2M [00:00<00:00, 26.5MB/s]\r\nDownloading data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:02<00:00, 2.70s/it]\r\nExtracting data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 1589.96it/s]\r\nDataset json downloaded and prepared to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 133.95it/s]\r\n\r\nIn [26]: ds\r\nOut[26]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'timeline_url', 'performed_via_github_app', 'is_pull_request'],\r\n num_rows: 3019\r\n })\r\n})\r\n```",
"Thanks for reporting @km5ar and thank you @albertvillanova for the quick solution! I'll post a fix on the source too"
] | 2022-10-06T19:48:58
| 2022-10-07T15:12:01
| 2022-10-07T15:12:01
|
NONE
| null | null | null | null |
## Describe the bug
I was following chapter 5 of the Hugging Face course: https://huggingface.co/course/chapter5/6?fw=tf
However, I'm not able to download the dataset; I get a 404 error.
<img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-4686-8631-13d879a0edee.png">
## Steps to reproduce the bug
```python
from huggingface_hub import hf_hub_url
data_files = hf_hub_url(
repo_id="lewtun/github-issues",
filename="datasets-issues-with-hf-doc-builder.jsonl",
repo_type="dataset",
)
from datasets import load_dataset
issues_dataset = load_dataset("json", data_files=data_files, split="train")
issues_dataset
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5086/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5086/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 19:23:03
|
https://api.github.com/repos/huggingface/datasets/issues/5085
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5085/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5085/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5085/events
|
https://github.com/huggingface/datasets/issues/5085
| 1,400,113,569
|
I_kwDODunzps5TdAmh
| 5,085
|
Filtering on an empty dataset returns a corrupted dataset.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36087158?v=4",
"events_url": "https://api.github.com/users/gabegma/events{/privacy}",
"followers_url": "https://api.github.com/users/gabegma/followers",
"following_url": "https://api.github.com/users/gabegma/following{/other_user}",
"gists_url": "https://api.github.com/users/gabegma/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gabegma",
"id": 36087158,
"login": "gabegma",
"node_id": "MDQ6VXNlcjM2MDg3MTU4",
"organizations_url": "https://api.github.com/users/gabegma/orgs",
"received_events_url": "https://api.github.com/users/gabegma/received_events",
"repos_url": "https://api.github.com/users/gabegma/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gabegma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabegma/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gabegma",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mouhanedg56",
"id": 23029765,
"login": "Mouhanedg56",
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mouhanedg56",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mouhanedg56",
"id": 23029765,
"login": "Mouhanedg56",
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mouhanedg56",
"user_view_type": "public"
}
] |
[
"~~It seems like #5043 fix (merged recently) is the root cause of such behaviour. When we empty indices mapping (because the dataset length equals to zero), we can no longer get column item like: `ds_filter_2['sentence']` which uses\r\n`ds_filter_1._indices.column(0)`~~\r\n\r\n**UPDATE:**\r\nEmpty datasets are returned without going through partial function on `map` method, which will not work to get indices for `filter`: we need to run `get_indices_from_mask_function` partial function on the dataset to get output = `{\"indices\": []}`. But this is complicated since functions used in args, in particular `get_indices_from_mask_function`, do not support empty datasets.\r\nWe can just handle empty datasets aside on filter method.",
"#self-assign",
"Thank you for solving this amazingly quickly!"
] | 2022-10-06T18:18:49
| 2022-10-07T19:06:02
| 2022-10-07T18:40:26
|
NONE
| null | null | null | null |
## Describe the bug
When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted.
## Steps to reproduce the bug
```python
datasets = load_dataset("glue", "sst2")
dataset_split = datasets['validation']
ds_filter_1 = dataset_split.filter(lambda x: False) # Some filtering condition that leads to an empty dataset
assert ds_filter_1.num_rows == 0
sentences = ds_filter_1['sentence']
assert len(sentences) == 0
ds_filter_2 = ds_filter_1.filter(lambda x: False) # Some other filtering condition
assert ds_filter_2.num_rows == 0
assert 'sentence' in ds_filter_2.column_names
sentences = ds_filter_2['sentence']
```
## Expected results
The last line should be returning an empty list, same as 4 lines above.
## Actual results
The last line currently raises `IndexError: index out of bounds`.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-11.6.6-x86_64-i386-64bit
- Python version: 3.9.11
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5085/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5085/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 0:21:37
|
https://api.github.com/repos/huggingface/datasets/issues/5083
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5083/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5083/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5083/events
|
https://github.com/huggingface/datasets/issues/5083
| 1,399,842,514
|
I_kwDODunzps5Tb-bS
| 5,083
|
Support numpy/torch/tf/jax formatting for IterableDataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"hii @lhoestq, can you assign this issue to me? Though i am new to open source still I would love to put my best foot forward. I can see there isn't anyone right now assigned to this issue.",
"Hi @zutarich ! This issue was fixed by #5852 - sorry I forgot to close it\r\n\r\nFeel free to look for other issues and ping me or @mariosasko if you have questions :)\r\nAlso let us know if we can help find an issue that can correspond to what you're looking for"
] | 2022-10-06T15:14:58
| 2023-10-09T12:42:15
| 2023-10-09T12:42:15
|
MEMBER
| null | null | null | null |
Right now `IterableDataset` doesn't do any formatting.
In particular this code should return a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
Right now it returns a PIL.Image.
Setting `streaming=False` does return a numpy array after #5072
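
Until such formatting is implemented, a possible stopgap (just a sketch, not the requested feature) is to perform the conversion manually with `IterableDataset.map`:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("imagenet-1k", split="train", streaming=True)
# Manually convert the PIL image to a numpy array for each example.
ds = ds.map(lambda example: {"image": np.asarray(example["image"])})
print(next(iter(ds))["image"])
```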
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5083/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5083/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 367 days, 21:27:17
|
https://api.github.com/repos/huggingface/datasets/issues/5081
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5081/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5081/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5081/events
|
https://github.com/huggingface/datasets/issues/5081
| 1,399,340,050
|
I_kwDODunzps5TaDwS
| 5,081
|
Bug loading `sentence-transformers/parallel-sentences`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] |
[
"tagging @nreimers ",
"The dataset is sadly not really compatible to be loaded with `load_dataset`. So far it is better to git clone it and to use the files directly.\r\n\r\nA data loading script would be needed to be added to this dataset. But this was too much overhead / not really intuitive how to create it.",
"Since the dataset is a bunch of TSVs we should not need a dataset script I think.\r\n\r\nBy default it tries to load all the TSVs at once, which fails here because they don't all have the same columns (pd.read_csv uses the first line as header by default). But those files have no header ! So, to properly load any TSV file in this repo, one has to pass `names=[...]` for pd.read_csv to know which column names to use.\r\n\r\nTo fix this situation, we can either do\r\n1. replace the TSVs by TSV with column names\r\n2. OR specify the pd.read_csv kwargs as YAML in the dataset card - and `datasets` would use that by default\r\n\r\nWDTY ?",
"There are more issues in the dataset.\r\nTo load OpenSubtitles I have to provide this (see `skiprows`):\r\n\r\n```python\r\ndf_os = pd.read_csv(\r\n \"./parallel-sentences/OpenSubtitles/OpenSubtitles-en-de-train.tsv.gz\", \r\n sep=\"\\t\", \r\n quoting=csv.QUOTE_NONE,\r\n header=None,\r\n names=[\"en\", \"de\"],\r\n skiprows=[540344, 9151700, 10040173, 10040199, 11314673, 11338258, 11869223, 12159297, 12251078, 12303334],\r\n)\r\n```",
"What's wrong with those lines exactly ?\r\nMaybe passing `error_bad_lines=False` (and maybe `warn_bad_lines=True`) can be helpful",
"> What's wrong with those lines exactly ? \r\n\r\nStuff like this: `ParserError: Error tokenizing data. C error: Expected 2 fields in line 540345, saw 3`\r\n\r\n",
"> Maybe passing error_bad_lines=False (and maybe warn_bad_lines=True) can be helpful\r\n\r\nYes. That would hide the issue but not solve it.",
"@nreimers WDYT about the two options mentioned above ?"
] | 2022-10-06T10:47:51
| 2022-10-11T10:00:48
| null |
CONTRIBUTOR
| null | null | null | null |
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("sentence-transformers/parallel-sentences")
```
raises this:
```
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [4], line 1
----> 1 dataset = load_dataset("sentence-transformers/parallel-sentences", split="train")
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/load.py:1693, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1690 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1692 # Download and prepare data
-> 1693 builder_instance.download_and_prepare(
1694 download_config=download_config,
1695 download_mode=download_mode,
1696 ignore_verifications=ignore_verifications,
1697 try_from_hf_gcs=try_from_hf_gcs,
1698 use_auth_token=use_auth_token,
1699 )
1701 # Build dataset for splits
1702 keep_in_memory = (
1703 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1704 )
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:807, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
801 if not downloaded_from_gcs:
802 prepare_split_kwargs = {
803 "file_format": file_format,
804 "max_shard_size": max_shard_size,
805 **download_and_prepare_kwargs,
806 }
--> 807 self._download_and_prepare(
808 dl_manager=dl_manager,
809 verify_infos=verify_infos,
810 **prepare_split_kwargs,
811 **download_and_prepare_kwargs,
812 )
813 # Sync info
814 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:898, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
894 split_dict.add(split_generator.split_info)
896 try:
897 # Prepare split will record examples associated to the split
--> 898 self._prepare_split(split_generator, **prepare_split_kwargs)
899 except OSError as e:
900 raise OSError(
901 "Cannot find data file. "
902 + (self.manual_download_instructions or "")
903 + "\nOriginal error:\n"
904 + str(e)
905 ) from None
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:1513, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size)
1506 shard_id += 1
1507 writer = writer_class(
1508 features=writer._features,
1509 path=fpath.replace("SSSSS", f"{shard_id:05d}"),
1510 storage_options=self._fs.storage_options,
1511 embed_local_files=embed_local_files,
1512 )
-> 1513 writer.write_table(table)
1514 finally:
1515 num_shards = shard_id + 1
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/arrow_writer.py:540, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
538 if self.pa_writer is None:
539 self._build_writer(inferred_schema=pa_table.schema)
--> 540 pa_table = table_cast(pa_table, self._schema)
541 if self.embed_local_files:
542 pa_table = embed_table_storage(pa_table)
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2044, in table_cast(table, schema)
2032 """Improved version of pa.Table.cast.
2033
2034 It supports casting to feature types stored in the schema metadata.
(...)
2041 table (:obj:`pyarrow.Table`): the casted table
2042 """
2043 if table.schema != schema:
-> 2044 return cast_table_to_schema(table, schema)
2045 elif table.schema.metadata != schema.metadata:
2046 return table.replace_schema_metadata(schema.metadata)
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2005, in cast_table_to_schema(table, schema)
2003 features = Features.from_arrow_schema(schema)
2004 if sorted(table.column_names) != sorted(features):
-> 2005 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2006 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
2007 return pa.Table.from_arrays(arrays, schema=schema)
ValueError: Couldn't cast
Action taken on Parliament's resolutions: see Minutes: string
Následný postup na základě usnesení Parlamentu: viz zápis: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 742
to
{'Membership of Parliament: see Minutes': Value(dtype='string', id=None), 'Състав на Парламента: вж. протоколи': Value(dtype='string', id=None)}
because column names don't match
```
## Expected results
no error
## Actual results
error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.13
- PyArrow version: pyarrow 9.0.0
- transformers 4.22.2
- datasets 2.5.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5081/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5081/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5080
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5080/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5080/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5080/events
|
https://github.com/huggingface/datasets/issues/5080
| 1,398,849,565
|
I_kwDODunzps5TYMAd
| 5,080
|
Use hfh for caching
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"There is some discussion in https://github.com/huggingface/huggingface_hub/pull/1088 if it can help :)"
] | 2022-10-06T05:51:58
| 2022-10-06T14:26:05
| null |
MEMBER
| null | null | null | null |
## Is your feature request related to a problem?
As previously discussed in our meeting with @Wauplin and agreed at our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.
## Describe the solution you'd like
Due to the peculiarities of the `datasets` cache, I would propose adopting `hfh` caching system in stages.
First, we could easily start using `hfh` caching for:
- dataset Python scripts
- dataset READMEs
- dataset infos JSON files (now deprecated)
Second, we could also use `hfh` caching for data files downloaded from the Hub.
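As a rough sketch of the first stage (hypothetical code; the repo id and filename below are placeholders, not an agreed design), dataset scripts hosted on the Hub could be fetched through `hfh`'s cache-system:
```python
from huggingface_hub import hf_hub_download

# hypothetical example: let hfh cache the dataset script instead of datasets' own cached_path
script_path = hf_hub_download(
    repo_id="squad",       # placeholder dataset repository
    filename="squad.py",   # placeholder script name
    repo_type="dataset",
    revision="main",
)
print(script_path)  # resolved inside the shared hfh cache (~/.cache/huggingface/hub)
```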
Further investigation is needed for:
- files downloaded from non-Hub hosts
- extracted files from downloaded archive/compressed files
- generated Arrow files
## Additional context
Docs about the `hfh` caching system:
- [Manage huggingface_hub cache-system](https://huggingface.co/docs/huggingface_hub/main/en/how-to-cache)
- [Cache-system reference](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/cache)
The `transformers` library has already adopted `hfh` for caching. See:
- huggingface/transformers#18438
- huggingface/transformers#18857
- huggingface/transformers#18966
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5080/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5080/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5075
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5075/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5075/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5075/events
|
https://github.com/huggingface/datasets/issues/5075
| 1,397,865,501
|
I_kwDODunzps5TUbwd
| 5,075
|
Throw EnvironmentError when token is not present
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
| null |
[] |
[
"@mariosasko I've raised a PR #5076 against this issue. Please help to review. Thanks."
] | 2022-10-05T14:14:18
| 2022-10-07T14:33:28
| 2022-10-07T14:33:28
|
COLLABORATOR
| null | null | null | null |
Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present.
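A minimal sketch of the requested change (illustrative only; the surrounding code and message text are placeholders):
```python
# inside Dataset.push_to_hub, once no token could be resolved:
if token is None:
    raise EnvironmentError(
        "You need to provide a `token` or be logged in to Hugging Face with `huggingface-cli login`."
    )
```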
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5075/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5075/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 0:19:10
|
https://api.github.com/repos/huggingface/datasets/issues/5074
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5074/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5074/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5074/events
|
https://github.com/huggingface/datasets/issues/5074
| 1,397,850,352
|
I_kwDODunzps5TUYDw
| 5,074
|
Replace AssertionErrors with more meaningful errors
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/galbwe",
"id": 20004072,
"login": "galbwe",
"node_id": "MDQ6VXNlcjIwMDA0MDcy",
"organizations_url": "https://api.github.com/users/galbwe/orgs",
"received_events_url": "https://api.github.com/users/galbwe/received_events",
"repos_url": "https://api.github.com/users/galbwe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galbwe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/galbwe",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/galbwe",
"id": 20004072,
"login": "galbwe",
"node_id": "MDQ6VXNlcjIwMDA0MDcy",
"organizations_url": "https://api.github.com/users/galbwe/orgs",
"received_events_url": "https://api.github.com/users/galbwe/received_events",
"repos_url": "https://api.github.com/users/galbwe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galbwe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/galbwe",
"user_view_type": "public"
}
] |
[
"Hi, can I pick up this issue?",
"#self-assign",
"Looks like the top-level `datasource` directory was removed when https://github.com/huggingface/datasets/pull/4974 was merged, so there are 3 source files to fix."
] | 2022-10-05T14:03:55
| 2022-10-07T14:33:11
| 2022-10-07T14:33:11
|
COLLABORATOR
| null | null | null | null |
Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.
The files containing AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
```
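For example, the typical pattern to apply (an illustrative before/after, not an actual diff from those files):
```python
# before: an opaque AssertionError with no indication of what went wrong
assert len(self.splits) == 1, "Cannot read multiple splits"

# after: a meaningful, documented and catchable exception
if len(self.splits) != 1:
    raise ValueError(f"Expected a single split, got {len(self.splits)}")
```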
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5074/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5074/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 0:29:16
|
https://api.github.com/repos/huggingface/datasets/issues/5070
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5070/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5070/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5070/events
|
https://github.com/huggingface/datasets/issues/5070
| 1,396,765,647
|
I_kwDODunzps5TQPPP
| 5,070
|
Support default config name when no builder configs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thank you for creating this feature request, Albert.\r\n\r\nFor context this is the datatest where Albert has been helping me to switch to on-the-fly split config https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing\r\n\r\nand the attempt to switch on-the-fly splits was here: https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/discussions/2/files\r\n\r\nbut which I had to revert since providing no split breaks at run time.\r\n"
] | 2022-10-04T19:49:35
| 2022-10-06T14:40:26
| 2022-10-06T14:40:26
|
MEMBER
| null | null | null | null |
**Is your feature request related to a problem? Please describe.**
As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined.
**Additional context**
In order to support creating configs on the fly **by name** (not using kwargs), the list of allowed builder configs `BUILDER_CONFIGS` must not be set.
However, if so, then `DEFAULT_CONFIG_NAME` is not supported.
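A hypothetical dataset script illustrating the requested behaviour (names and config values are invented for the example):
```python
import datasets

class MySyntheticDataset(datasets.GeneratorBasedBuilder):
    # No BUILDER_CONFIGS: configs are created on the fly from the requested name,
    # e.g. load_dataset("org/my-synthetic", "100.unicode")
    DEFAULT_CONFIG_NAME = "100.unicode"  # should be picked when no config name is passed

    def _info(self):
        return datasets.DatasetInfo(description="synthetic test data")

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        yield 0, {"config": self.config.name}
```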
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5070/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5070/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 18:50:51
|
https://api.github.com/repos/huggingface/datasets/issues/5061
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5061/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5061/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5061/events
|
https://github.com/huggingface/datasets/issues/5061
| 1,395,476,770
|
I_kwDODunzps5TLUki
| 5,061
|
`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZhaofengWu",
"id": 11954789,
"login": "ZhaofengWu",
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZhaofengWu",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"This is maybe related to python 3.10, do you think you could try on 3.8 ?\r\n\r\nIn the meantime we'll keep improving the support for 3.10. Let me add a dedicated CI",
"I did some binary search and seems like the root cause is either `multiprocess` or `dill`. python 3.10 is fine. Specifically:\r\n- `multiprocess==0.70.12.2, dill==0.3.4`: works\r\n- `multiprocess==0.70.12.2, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.4`: can't test, `multiprocess==0.70.13` requires `dill>=0.3.5.1`\r\n\r\nI will pin their versions on my end. I don't have enough knowledge of how python multiprocessing works to debug this, but ideally there could be a fix. It's also possible that I'm doing something wrong in my code, but again the `.name` of the logger that failed to pickle is `datasets.fingerprint`, which I'm not using directly.",
"Do you know which logger fails at being pickled ?",
"I'm not 100% sure how to figure it out -- the stack trace above doesn't clearly give me a place where I can print out who owns the logger, etc. I only found out its `.name` is `datasets.fingerprint` by printing right before\r\n```\r\n File \".../logging/__init__.py\", line 1774, in __reduce__\r\n raise pickle.PicklingError('logger cannot be pickled')\r\n```\r\nIf you have any idea on how to find it out, please let me know.",
"Ok I see, not sure why it triggers this error though, in `logging.py` the code is\r\n\r\nhttps://github.com/python/cpython/blob/c9da063e32725a66495e4047b8a5ed13e72d9e8e/Lib/logging/__init__.py#L1769-L1775\r\n\r\nand on my side it works on 3.10 with dill 0.3.5.1 and multiprocess 0.70.13\r\n```python\r\n>>> datasets.fingerprint.logger.__reduce__() \r\n(<function logging.getLogger(name=None)>, ('datasets.fingerprint',))\r\n```\r\nCould you try to run this code ?\r\n\r\nAre you in an environment where the loggers are instantiated differently ? Can you check the source code of `logging.Logger.__reduce__` in `\".../logging/__init__.py\", line 1774` ?",
"Closing due to inactivity."
] | 2022-10-03T23:51:38
| 2023-07-21T14:43:35
| 2023-07-21T14:43:34
|
NONE
| null | null | null | null |
## Describe the bug
When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.
```
File "~/project/dataset.py", line 204, in <dictcomp>
split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 2489, in map
transformed_shards[index] = async_result.get()
File ".../site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File ".../site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File ".../site-packages/multiprocess/connection.py", line 214, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File ".../site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File ".../site-packages/dill/_dill.py", line 620, in dump
StockPickler.dump(self, obj)
File ".../pickle.py", line 487, in dump
self.save(obj)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 902, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1154, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 578, in save
rv = reduce(self.proto)
File ".../logging/__init__.py", line 1774, in __reduce__
raise pickle.PicklingError('logger cannot be pickled')
_pickle.PicklingError: logger cannot be pickled
```
## Steps to reproduce the bug
Sorry, I wasn't able to put together a minimal reproducible example, but the offending line on my end is
```python
dataset.map(
lambda examples: self.tokenize(examples), # this doesn't matter, lambda e: [1] * len(...) also breaks. In fact I'm pretty sure it breaks before executing this lambda
batched=True,
num_proc=4,
)
```
This does work when `num_proc=1`, so it's likely a multiprocessing thing.
## Expected results
`map` succeeds
## Actual results
The error trace above.
## Environment info
- `datasets` version: 1.16.1 and 2.5.1 both failed
- Platform: Ubuntu 20.04.4 LTS
- Python version: 3.10.4
- PyArrow version: 9.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5061/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5061/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 290 days, 14:51:56
|
https://api.github.com/repos/huggingface/datasets/issues/5060
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5060/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5060/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5060/events
|
https://github.com/huggingface/datasets/issues/5060
| 1,395,382,940
|
I_kwDODunzps5TK9qc
| 5,060
|
Unable to Use Custom Dataset Locally
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zanussbaum",
"id": 33707069,
"login": "zanussbaum",
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zanussbaum",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co/datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly",
"Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read from a URL? Any links to code/documentation would be greatly appreciated, I'd love to learn more",
"`datasets` extends `open` in dataset scripts to work with URLs. The builtin `open` from python only works with local files.\r\n\r\nYou can find the extension here: https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/download/streaming_download_manager.py#L435-L451\r\n\r\nI think we can create a docs section dedicated to streaming to explain how this works",
"Closing this one - feel free to reopen if you have more questions"
] | 2022-10-03T21:55:16
| 2022-10-06T14:29:18
| 2022-10-06T14:29:17
|
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in the same folder or repository of the dataset script,
you can just pass the relative paths to the files instead of URLs.
```
Accordingly, I put the [relative path](https://huggingface.co/datasets/zpn/pubchem_selfies/blob/main/pubchem_selfies.py#L76) to the data to be used. I was able to test the dataset and generate the metadata locally with `datasets-cli test path/to/<your-dataset-loading-script> --save_infos --all_configs`
However, if I try to load the data using `load_dataset`, I get the following error
```
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("zpn/pubchem_selfies", streaming=True)
>>> t = dataset["train"]
>>> for item in t:
...... print(item)
...... break
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 723, in __iter__
for key, example in self._iter():
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 713, in _iter
yield from ex_iterable
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/zachnussbaum/.cache/huggingface/modules/datasets_modules/datasets/zpn--pubchem_selfies/d2571f35996765aea70fd3f3f8e3882d59c401fb738615c79282e2eb1d9f7a25/pubchem_selfies.py", line 475, in _generate_examples
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
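For reference, a minimal sketch of the fix suggested in the comments (the generator signature and field name here are assumptions): in streaming mode the script has to go through the `open` built-in, which `datasets` extends to handle URLs, instead of passing a URL directly to `gzip.open`:
```python
import gzip

def _generate_examples(self, filepath):
    # `open` is extended by `datasets` in streaming mode, so it also accepts
    # https:// paths; gzip then wraps the returned file object
    with open(filepath, "rb") as f:
        with gzip.open(f, mode="rt") as g:
            for idx, line in enumerate(g):
                yield idx, {"selfies": line.strip()}  # field name is a placeholder
```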
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5060/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5060/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 16:34:01
|
https://api.github.com/repos/huggingface/datasets/issues/5053
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5053/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5053/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5053/events
|
https://github.com/huggingface/datasets/issues/5053
| 1,393,739,882
|
I_kwDODunzps5TEshq
| 5,053
|
Intermittent JSON parse error when streaming the Pile
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/77788841?v=4",
"events_url": "https://api.github.com/users/neelnanda-io/events{/privacy}",
"followers_url": "https://api.github.com/users/neelnanda-io/followers",
"following_url": "https://api.github.com/users/neelnanda-io/following{/other_user}",
"gists_url": "https://api.github.com/users/neelnanda-io/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neelnanda-io",
"id": 77788841,
"login": "neelnanda-io",
"node_id": "MDQ6VXNlcjc3Nzg4ODQx",
"organizations_url": "https://api.github.com/users/neelnanda-io/orgs",
"received_events_url": "https://api.github.com/users/neelnanda-io/received_events",
"repos_url": "https://api.github.com/users/neelnanda-io/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neelnanda-io/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neelnanda-io/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neelnanda-io",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] |
[
"Maybe #2838 can help. In this PR we allow to skip bad chunks of JSON data to not crash the training\r\n\r\nDid you have warning messages before the error ?\r\n\r\nsomething like this maybe ?\r\n```\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:09 - ERROR - datasets.packaged_modules.json.json - Failed to read file 'gzip://file-000000000007.json::https://huggingface.co/datasets/lvwerra/codeparrot-clean-train/resolve/1d740acb9d09cf7a3307553323e2c677a6535407/file-000000000007.json.gz' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0\r\n```",
"Ah, thanks! I did get errors like that. Sad that PR wasn't merged in! \r\n\r\nI'm currently just downloading 200GB of the Pile locally to avoid streaming (I have space and it's faster anyway), but that's really useful! I can probably apply the dumb patch of just commenting out the bits that raise the JSON Parse Error lol, based on your code - if I continue the loop should it be fine?",
"Yup you can get some inspiration from this PR. It simply ignores the bad chunks (a chunk is ~a few MBs of data).\r\nWe'll try to merge this PR soon"
] | 2022-10-02T11:56:46
| 2022-10-04T17:59:03
| null |
NONE
| null | null | null | null |
## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point at which it happens also varies - it happened 11B tokens and 4 days into one training run, and just now happened 2 minutes into another, but I can't reliably reproduce it.
I'm using a remote machine with 8 A6000 GPUs via runpod.io
## Expected results
I have a DataLoader which can iterate through the whole Pile
## Actual results
Stack trace:
```
Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
```
I'm currently using HuggingFace accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's to do with parallelisation
```
Traceback (most recent call last):
File "ddp_script.py", line 1258, in <module>
main()
File "ddp_script.py", line 1143, in main
for c, batch in tqdm.tqdm(enumerate(data_iter)):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__
next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator)
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches
broadcast_object_list(batch_info)
File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list
torch.distributed.broadcast_object_list(object_list, src=from_process)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object
return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '@'.
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(
cfg["dataset_name"], streaming=True, split="train")
dataset = dataset.remove_columns("meta")
dataset = dataset.map(tokenize_and_concatenate, batched=True)
dataset = dataset.with_format(type="torch")
train_data_loader = DataLoader(
dataset, batch_size=cfg["batch_size"], num_workers=3)
for batch in train_data_loader:
continue
```
`tokenize_and_concatenate` is a custom tokenization function I defined on the GPT-NeoX tokenizer; it tokenizes the text, separates documents with endoftext tokens, and reshapes the result to have length batch_size. I don't think this is related to tokenization:
```
import numpy as np
import einops
import torch
def tokenize_and_concatenate(examples):
texts = examples["text"]
full_text = tokenizer.eos_token.join(texts)
div = 20
length = len(full_text) // div
text_list = [full_text[i * length: (i + 1) * length]
for i in range(div)]
tokens = tokenizer(text_list, return_tensors="np", padding=True)[
"input_ids"
].flatten()
tokens = tokens[tokens != tokenizer.pad_token_id]
n = len(tokens)
curr_batch_size = n // (seq_len - 1)
tokens = tokens[: (seq_len - 1) * curr_batch_size]
tokens = einops.rearrange(
tokens,
"(batch_size seq) -> batch_size seq",
batch_size=curr_batch_size,
seq=seq_len - 1,
)
prefix = np.ones((curr_batch_size, 1), dtype=np.int64) * \
tokenizer.bos_token_id
return {
"text": np.concatenate([prefix, tokens], axis=1)
}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-105-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
ZStandard data:
Version: 0.18.0
Summary: Zstandard bindings for Python
Home-page: https://github.com/indygreg/python-zstandard
Author: Gregory Szorc
Author-email: gregory.szorc@gmail.com
License: BSD
Location: /opt/conda/lib/python3.7/site-packages
Requires:
Required-by:
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5053/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5053/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5050
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5050/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5050/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5050/events
|
https://github.com/huggingface/datasets/issues/5050
| 1,392,381,882
|
I_kwDODunzps5S_g-6
| 5,050
|
Restore saved format state in `load_from_disk`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/asofiaoliveira",
"id": 74454835,
"login": "asofiaoliveira",
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"type": "User",
"url": "https://api.github.com/users/asofiaoliveira",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/asofiaoliveira",
"id": 74454835,
"login": "asofiaoliveira",
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"type": "User",
"url": "https://api.github.com/users/asofiaoliveira",
"user_view_type": "public"
}
] |
[
"Hi, can I work on this?",
"Hi, sure! Let us know if you need some pointers/help."
] | 2022-09-30T12:40:07
| 2022-10-11T16:49:24
| 2022-10-11T16:49:24
|
COLLABORATOR
| null | null | null | null |
Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815
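A minimal sketch of the reported behaviour (writes to a local directory; at the time of writing the format type comes back as `None` after reloading):
```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"x": [1, 2, 3]})
ds.set_format("numpy")               # format state is stored by save_to_disk
ds.save_to_disk("tmp_formatted_ds")

reloaded = load_from_disk("tmp_formatted_ds")
print(ds.format["type"], reloaded.format["type"])  # "numpy" vs. None before the fix
```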
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5050/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5050/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 11 days, 4:09:17
|
https://api.github.com/repos/huggingface/datasets/issues/5046
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5046/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5046/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5046/events
|
https://github.com/huggingface/datasets/issues/5046
| 1,391,372,519
|
I_kwDODunzps5S7qjn
| 5,046
|
Audiofolder creates empty Dataset if files same level as metadata
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4",
"events_url": "https://api.github.com/users/msis/events{/privacy}",
"followers_url": "https://api.github.com/users/msis/followers",
"following_url": "https://api.github.com/users/msis/following{/other_user}",
"gists_url": "https://api.github.com/users/msis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/msis",
"id": 577139,
"login": "msis",
"node_id": "MDQ6VXNlcjU3NzEzOQ==",
"organizations_url": "https://api.github.com/users/msis/orgs",
"received_events_url": "https://api.github.com/users/msis/received_events",
"repos_url": "https://api.github.com/users/msis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/msis",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco",
"user_view_type": "public"
}
] |
[
"Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: https://colab.research.google.com/drive/1IhQzULYi0Van1xLrN_SddBX1JF7mLZZK?usp=sharing)",
"I think we can make the file name matching part more robust by replacing `file_name` with `os.path.normpath(file_name)`, to ignore \"./\" among other things, in these two places:\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L319\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L388",
"@mariosasko Some tests failed (see my PR). Any thoughts on that?",
"Yes, I mentioned the solution in my review.",
"I realized what I was doing wrong.\r\n\r\nThe documentation puts the files in a subfolder.\r\nOnce I have done that, it worked.\r\n\r\nBut l agree that this should be handled better if possible."
] | 2022-09-29T19:17:23
| 2022-10-28T13:05:07
| 2022-10-28T13:05:07
|
NONE
| null | null | null | null |
## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88
## Steps to reproduce the bug
`metadata.csv`:
```csv
file_name,duration,transcription
./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello
```
```python
>>> audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/")
>>> audio_dataset
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
I've tried, with no success:
- setting `split` to something else so I don't get a `DatasetDict`,
- removing the `./`,
- using `.jsonl`.
## Expected results
```
Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 1
})
```
## Actual results
```
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5046/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5046/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 28 days, 17:47:44
|
https://api.github.com/repos/huggingface/datasets/issues/5045
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5045/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5045/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5045/events
|
https://github.com/huggingface/datasets/issues/5045
| 1,391,287,609
|
I_kwDODunzps5S7V05
| 5,045
|
Automatically revert to last successful commit to hub when a push_to_hub is interrupted
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13120204?v=4",
"events_url": "https://api.github.com/users/jorahn/events{/privacy}",
"followers_url": "https://api.github.com/users/jorahn/followers",
"following_url": "https://api.github.com/users/jorahn/following{/other_user}",
"gists_url": "https://api.github.com/users/jorahn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jorahn",
"id": 13120204,
"login": "jorahn",
"node_id": "MDQ6VXNlcjEzMTIwMjA0",
"organizations_url": "https://api.github.com/users/jorahn/orgs",
"received_events_url": "https://api.github.com/users/jorahn/received_events",
"repos_url": "https://api.github.com/users/jorahn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jorahn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorahn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jorahn",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nMaybe `push_to_hub` be implemented as a single commit @Wauplin ? This way if it fails, the repo is still at the previous (valid) state instead of ending-up in an invalid/incimplete state.",
"> Maybe push_to_hub be implemented as a single commit ? \r\n\r\nI think that would definitely be the way to go. Do you know the reasons why not implementing it like this in the first place ? I guess it is because of not been able to upload all at once with `huggingface_hub` but if there was another reason, please let me know.\r\nAbout pushing all at once, it seems to be a more and more requested feature. I have created this issue https://github.com/huggingface/huggingface_hub/issues/1085 recently but other discussions already happened in the past. The `moon-landing` team is working on it (cc @coyotte508). The `huggingface_hub` integration will come afterwards.\r\n\r\nFor now, maybe it's best to wait for a proper implementation instead of creating a temporary workaround :)\r\n",
"> I think that would definitely be the way to go. Do you know the reasons why not implementing it like this in the first place ? I guess it is because of not been able to upload all at once with huggingface_hub but if there was another reason, please let me know.\r\n\r\nIdeally we would want to upload the files iteratively - and then once everything is uploaded we proceed to commit. When we implemented `push_to_hub`, using `upload_file` for each shard was the only option.\r\n\r\nFor more context: for each shard to upload we do:\r\n1. load the arrow shard in memory\r\n2. convert to parquet\r\n3. upload\r\n\r\nSo to avoid OOM we need to upload the files iteratively.\r\n\r\n> For now, maybe it's best to wait for a proper implementation instead of creating a temporary workaround :)\r\n\r\nLet us know if we can help !",
"> Ideally we would want to upload the files iteratively - and then once everything is uploaded we proceed to commit. \r\n\r\nOh I see. So maybe this has to be done in an implementation specific to `datasets/` as it is not a very common case (upload a bunch of files on the fly).\r\n\r\nYou can maybe have a look at how `huggingface_hub` is implemented for LFS files (arrow shards are LFS anyway, right?).\r\nIn [`upload_lfs_files`](https://github.com/huggingface/huggingface_hub/blob/e28646c977fc9304a4c3576ce61ff07f9778950b/src/huggingface_hub/_commit_api.py#L164) LFS files are uploaded 1 by 1 (multithreaded) and then [the commit is pushed](https://github.com/huggingface/huggingface_hub/blob/e28646c977fc9304a4c3576ce61ff07f9778950b/src/huggingface_hub/hf_api.py#L1926) to the Hub once all files have been uploaded. This is pretty much what you need, right ?\r\n\r\nI can help you if you have questions how to do it in `datasets`. If that makes sense we could then move the implementation from `datasets` to `huggingface_hub` once it's mature. Next week I'm on holidays but feel free to start without my input.\r\n\r\n(also cc @coyotte508 and @SBrandeis who implemented LFS upload in `hfh`)",
"> Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nHereβs part of the stack trace, that I can reproduce at the moment from a photo I took (potential typos from OCR):\r\n```\r\nValueError\r\nTraceback (most recent call last)\r\n<ipython-input-4-274613b7d3f5> in <module>\r\nfrom datasets import load dataset\r\nds = load_dataset('jrahn/chessv6', use_auth_token-True)\r\n\r\n/us/local/1ib/python3.7/dist-packages/datasets/table.py in cast_table _to_schema (table, schema)\r\nLine 2005 raise ValueError()\r\n\r\nValueError: Couldn't cast \r\nfen: string \r\nmove: string \r\nres: string \r\neco: string \r\nmove_id: int64\r\nres_num: int64 to\r\n{ 'fen': Value(dtype='string', id=None), \r\n'move': Value(dtype=' string', id=None),\r\n'res': Value(dtype='string', id=None),\r\n'eco': Value(dtype='string', id=None), \r\n'hc': Value(dtype='string', id=None), \r\n'move_ id': Value(dtype='int64', id=None),\r\n'res_num': Value(dtype= 'int64' , id=None) }\r\nbecause column names don't match \r\n```\r\n\r\nThe column 'hc' was removed before the interrupted push_to_hub(). It appears in the column list in curly brackets but not in the column list above.\r\n\r\nLet me know, if I can be of any help."
] | 2022-09-29T18:08:12
| 2023-10-16T13:30:49
| 2023-10-16T13:30:49
|
NONE
| null | null | null | null |
**Is your feature request related to a problem? Please describe.**
I pushed a modification of a large dataset (removing a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset raising an error on load_dataset() (ValueError: couldn't cast … because column names don't match). Only by specifying the previous (complete) commit as revision=commit_hash in load_dataset() was I able to repair this; after a successful, complete push, the dataset loads without error again.
**Describe the solution you'd like**
Would it make sense to detect an incomplete push_to_hub() and automatically revert to the previous commit/revision?
**Describe alternatives you've considered**
Leave everything as is; the revision parameter in load_dataset() allows manually fixing this problem.
**Additional context**
Provide useful defaults
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5045/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5045/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 381 days, 19:22:37
|
https://api.github.com/repos/huggingface/datasets/issues/5044
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5044/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5044/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5044/events
|
https://github.com/huggingface/datasets/issues/5044
| 1,391,242,908
|
I_kwDODunzps5S7K6c
| 5,044
|
integrate `load_from_disk` into `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"I agree the situation is not ideal and it would be awesome to use `load_dataset` to reload a dataset saved locally !\r\n\r\nFor context:\r\n\r\n- `load_dataset` works in three steps: download the dataset, then prepare it as an arrow dataset, and finally return a memory mapped arrow dataset. In particular it creates a cache directory to store the arrow data and the subsequent cache files for `map`.\r\n\r\n- `load_from_disk` directly returns a memory mapped dataset from the arrow file (similar to `Dataset.from_file`). It doesn't create a cache diretory, instead all the subsequent `map` calls write in the same directory as the original data. \r\n\r\nIf we want to keep the download_and_prepare step for consistency, it would unnecessarily copy the arrow data into the datasets cache. On the other hand if we don't do this step, the cache directory doesn't exist which is inconsistent.\r\n\r\nI'm curious, what would you expect to happen in this situation ?",
"Thank you for the detailed breakdown, @lhoestq \r\n\r\n> I'm curious, what would you expect to happen in this situation ?\r\n\r\n1. the simplest solution is to add a flag to the dataset saved by `save_to_disk` and have `load_dataset` check that flag - if it's set simply switch control to `load_from_disk` behind the scenes. So `load_dataset` detects it's a local filesystem, looks inside to see whether it's something it can cache or whether it should use it directly as is and continues accordingly with one of the 2 dataset-type specific APIs.\r\n\r\n2. the more evolved solution is to look at a dataset produced by `save_to_disk` as a remote resource like hub. So the first time `load_dataset` sees it, it'll take a fingerprint and create a normal cached dataset. On subsequent uses it'll again discover it as a remote resource, validate that it has it cached via the fingerprint and serve as a normal dataset. \r\n\r\nAs you said the cons of approach 2 is that if the dataset is huge it'll make 2 copies on the same machine. So it's possible that both approaches can be integrated. Say if `save_to_disc(do_not_cache=True)` is passed it'll use solution 1, otherwise solution 2. or could even symlink the huge arrow files to the cache instead? or perhaps it's more intuitive to use `load_dataset(do_not_cache=True)` instead. So that one can choose whether to make a cached copy or not for the locally saved dataset. i.e. a simple at use point user control.\r\n\r\nSurely there are other ways to handle it, this is just one possibility.\r\n",
"I think the simplest is to always memory map the local file without copy, but still have a cached directory in the cache at `~/.cache/huggingface` instead of saving `map` results next to the original data.\r\n\r\nIn practice we can even use symlinks if it makes the implementation simpler",
"Yes, so that you always have the cached entry for any dataset, but the \"payload\" doesn't have to be physically in the cache if it's already on the local filesystem. As you said a symlink will do. ",
"Any updates?",
"We haven't had the bandwidth to implement this so far. Let me know if you'd be interested in contributing this feature :)",
"@lhoestq I can jump into that. What I don't like is having functions with many parameters input. Even though they are optional, it's always harder to reason about and test such cases.\r\nIf there are more features worth to work on, feel free to ping me. It's a lot of fun to help :smile: ",
"Thanks a lot for your help @mariusz-jachimowicz-83 :)\r\n\r\nI think as a first step we could implement an Arrow dataset builder to be able to load and stream Arrow datasets locally or from Hugging Face. Maybe something similar to the Parquet builder at [src/datasets/packaged_modules/parquet/parquet.py](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py) ?\r\n\r\nAnd we can deal with the disk space optimization as a second step. What do you think ?\r\n\r\n(this issue is also related to https://github.com/huggingface/datasets/issues/3035)",
"@lhoestq I made a PR based on suggestion https://github.com/huggingface/datasets/pull/5944. Could you please review it?",
"@lhoestq Let me know if you have further recommendations or anything that you would like to add but you don't have bandwith for. ",
"Any update on this issue? It makes existing scripts and examples fall flat when provided with a customized/preprocessed dataset saved to disk.",
"This would be a really useful in terms of user experience. ",
"Is there any update on this? This would improves the clarity and consistency of the implementations.",
"Not yet ! Though we do have an Arrow loader in `load_dataset` now, so the remaining items are:\n\n1. update `load_dataset()` to support the old `save_to_disk()` structure with a Warning message that it's not the structure it generally uses and it's enabled for compatibility purposes\n\n(Q: `load_dataset()` works using a cache that contains cached Arrow files of any dataset, so if the dataset is already in Arrow we can optionally make it symlink the files in the cache instead of copying them ? for consistency I would still copy the data. Especially for cases where the dataset location is on a slow disk, it can be better to copy the data once to the fast cache)\n\n2. update `save_to_disk()` to export in a `load_dataset()` compatible structure",
"Hi! Quick update β I just opened [PR #7653](https://github.com/huggingface/datasets/pull/7653) to address this UX inconsistency.\n\nIt adds a fallback in `load_dataset()` that auto-detects when the path is a directory saved via `save_to_disk()`, and internally redirects to `load_from_disk()`, with a warning.\n\n```python\n# This now works as expected\nds = load_dataset(\"/path/to/saved_dataset\")\n````\n\nThis avoids loading `_data_files` metadata rows by mistake, which confused many users (e.g. in #7503).\n\nItβs aligned with @lhoestqβs comment β to detect saved datasets and memory-map them directly instead of reprocessing.\n\nThe PR keeps things simple for now without introducing ArrowBuilder or new cache logic β just improves reliability where `load_dataset()` is hardcoded (like in TRL or `lighteval`).\n\nWould love feedback!"
] | 2022-09-29T17:37:12
| 2025-06-28T09:00:44
| null |
CONTRIBUTOR
| null | null | null | null |
**Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal, similar to `from_pretrained` in `transformers`, so that it can handle both the hub and local-path datasets of all supported types?
Currently one has to choose a different loader depending on how the dataset has been created.
e.g. this won't work:
```
$ git clone https://huggingface.co/datasets/severo/test-parquet
$ python -c 'from datasets import load_dataset; ds=load_dataset("test-parquet"); \
ds.save_to_disk("my_dataset"); load_dataset("my_dataset")'
[...]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 1968, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
```
both times the dataset is being loaded from disk. Why does it fail the second time?
Why can't `save_to_disk` generate a dataset that can be immediately loaded by `load_dataset`?
e.g. the simplest hack would be to have `save_to_disk` add some flag to the saved dataset, that tells `load_dataset` to internally call `load_from_disk`. like having `save_to_disk` create a `load_me_with_load_from_disk.txt` file ;) and `load_dataset` will support that feature from saved datasets from new `datasets` versions. The old ones will still need to use `load_from_disk` explicitly. Unless the flag is not needed and one can immediately tell by looking at the saved dataset that it was saved via `save_to_disk` and thus use `load_from_disk` internally.
The use-case is defining a simple API where the user only ever needs to pass a `dataset_name_or_path` and it will always just work. Currently one needs to manually add additional switches telling the system whether to use one loading method or the other, which works but isn't smooth.
Thank you!
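In the meantime, a small client-side dispatch wrapper works as a stopgap; note that the marker file names checked here (`dataset_dict.json` / `state.json`) are what `save_to_disk` writes today and are an implementation detail rather than a stable API:
```python
import os
from datasets import load_dataset, load_from_disk

def load_any(path_or_name, **kwargs):
    # directories produced by save_to_disk() contain these marker files instead of raw data files
    if os.path.isdir(path_or_name) and (
        os.path.isfile(os.path.join(path_or_name, "dataset_dict.json"))
        or os.path.isfile(os.path.join(path_or_name, "state.json"))
    ):
        return load_from_disk(path_or_name)
    return load_dataset(path_or_name, **kwargs)
```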
| null |
{
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5044/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5044/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5039
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5039/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5039/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5039/events
|
https://github.com/huggingface/datasets/issues/5039
| 1,390,353,315
|
I_kwDODunzps5S3xuj
| 5,039
|
Hendrycks Checksum
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9974388?v=4",
"events_url": "https://api.github.com/users/DanielHesslow/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielHesslow/followers",
"following_url": "https://api.github.com/users/DanielHesslow/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielHesslow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DanielHesslow",
"id": 9974388,
"login": "DanielHesslow",
"node_id": "MDQ6VXNlcjk5NzQzODg=",
"organizations_url": "https://api.github.com/users/DanielHesslow/orgs",
"received_events_url": "https://api.github.com/users/DanielHesslow/received_events",
"repos_url": "https://api.github.com/users/DanielHesslow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DanielHesslow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielHesslow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DanielHesslow",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thanks for reporting, @DanielHesslow. We are fixing it. ",
"@albertvillanova thanks for taking care of this so quickly!",
"The dataset metadata is fixed. You can download it normally."
] | 2022-09-29T06:56:20
| 2022-09-29T10:23:30
| 2022-09-29T10:04:20
|
NONE
| null | null | null | null |
Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not match; I guess the file has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.tar']
```
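While the hosted metadata was out of date, the usual stopgap was to skip verification (`ignore_verifications` is the parameter name in the `datasets` 2.x releases current at the time; it may differ in later versions):
```python
from datasets import load_dataset

# skips the checksum/size checks against the recorded dataset metadata
ds = load_dataset("hendrycks_test", "abstract_algebra", ignore_verifications=True)
```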
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5039/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5039/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3:08:00
|
https://api.github.com/repos/huggingface/datasets/issues/5038
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5038/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5038/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5038/events
|
https://github.com/huggingface/datasets/issues/5038
| 1,389,631,122
|
I_kwDODunzps5S1BaS
| 5,038
|
`Dataset.unique` showing wrong output after filtering
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4",
"events_url": "https://api.github.com/users/mxschmdt/events{/privacy}",
"followers_url": "https://api.github.com/users/mxschmdt/followers",
"following_url": "https://api.github.com/users/mxschmdt/following{/other_user}",
"gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mxschmdt",
"id": 4904985,
"login": "mxschmdt",
"node_id": "MDQ6VXNlcjQ5MDQ5ODU=",
"organizations_url": "https://api.github.com/users/mxschmdt/orgs",
"received_events_url": "https://api.github.com/users/mxschmdt/received_events",
"repos_url": "https://api.github.com/users/mxschmdt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mxschmdt",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi! It seems like `flatten_indices` (called in `unique`) doesn't know how to handle empty indices mappings. I'm working on the fix.",
"Thanks, that was fast!"
] | 2022-09-28T16:20:35
| 2022-09-30T15:44:25
| 2022-09-30T15:44:25
|
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
After filtering a dataset, if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(dataset.unique('id'))
```
## Expected results
The above code should return an empty list since the dataset is empty.
## Actual results
```bash
[0]
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.18.19-100.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.14
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5038/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5038/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 23:23:50
|
https://api.github.com/repos/huggingface/datasets/issues/5032
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5032/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5032/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5032/events
|
https://github.com/huggingface/datasets/issues/5032
| 1,388,270,935
|
I_kwDODunzps5Sv1VX
| 5,032
|
new dataset type: single-label and multi-label video classification
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4",
"events_url": "https://api.github.com/users/fcakyon/events{/privacy}",
"followers_url": "https://api.github.com/users/fcakyon/followers",
"following_url": "https://api.github.com/users/fcakyon/following{/other_user}",
"gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fcakyon",
"id": 34196005,
"login": "fcakyon",
"node_id": "MDQ6VXNlcjM0MTk2MDA1",
"organizations_url": "https://api.github.com/users/fcakyon/orgs",
"received_events_url": "https://api.github.com/users/fcakyon/received_events",
"repos_url": "https://api.github.com/users/fcakyon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fcakyon",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"Hi ! You can in the `features` folder how we implemented the audio and image feature types.\r\n\r\nWe can have something similar to videos. What we need to decide:\r\n- the video loading library to use\r\n- the output format when a user accesses a video type object\r\n- what parameters a `Video()` feature type needs\r\n\r\nalso cc @nateraw who also took a look at what we can do for video",
"@lhoestq @nateraw is there any progress on adding video classification datasets? ",
"Hi ! I think we just missing which lib we're going to use to decode the videos + which parameters must go in the `Video` type",
"Hmm. `decord` could be nice but it's no longer maintained [it seems](https://github.com/dmlc/decord/issues/214). ",
"pytorchvideo uses [pyav](https://github.com/PyAV-Org/PyAV) as the default decoder: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L37\r\n\r\nAlso it would be great if `optionally` audio can also be decoded from the video as in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L35\r\n\r\nHere are the other decoders supported in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/encoded_video.py#L17\r\n",
"@sayakpaul I did do quite a bit of work on [this PR](https://github.com/huggingface/datasets/pull/4532) a while back to add a video feature. It's outdated, but uses my `encoded_video` [package](https://github.com/nateraw/encoded-video) under the hood, which is basically a wrapper around PyAV stolen from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo/) that gets rid of the `torch` dependency. \r\n\r\nwould be really great to get something like this in...it's just a really tricky and time consuming feature to add. "
] | 2022-09-27T19:40:11
| 2022-11-02T19:10:13
| null |
NONE
| null | null | null | null |
**Is your feature request related to a problem? Please describe.**
In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset.
**Describe the solution you'd like**
Assume I have video files with single or multiple labels, and I want to train a single-/multi-label video classification model. I want datasets to support generating multi-modal batches (audio + frame sequence) from video files. The audio waveform and frame sequence can be extracted from each video clip; then I can use any audio, image or video model from the transformers library to extract features, which will be fed into my model.
**Describe alternatives you've considered**
Currently, I am using https://github.com/facebookresearch/pytorchvideo dataloaders. There don't seem to be many alternatives.
**Additional context**
I am willing to open a PR but don't know where to start.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5032/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5032/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5028
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5028/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5028/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5028/events
|
https://github.com/huggingface/datasets/issues/5028
| 1,386,272,533
|
I_kwDODunzps5SoNcV
| 5,028
|
passing parameters to the method passed to Dataset.from_generator()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/64276129?v=4",
"events_url": "https://api.github.com/users/Basir-mahmood/events{/privacy}",
"followers_url": "https://api.github.com/users/Basir-mahmood/followers",
"following_url": "https://api.github.com/users/Basir-mahmood/following{/other_user}",
"gists_url": "https://api.github.com/users/Basir-mahmood/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Basir-mahmood",
"id": 64276129,
"login": "Basir-mahmood",
"node_id": "MDQ6VXNlcjY0Mjc2MTI5",
"organizations_url": "https://api.github.com/users/Basir-mahmood/orgs",
"received_events_url": "https://api.github.com/users/Basir-mahmood/received_events",
"repos_url": "https://api.github.com/users/Basir-mahmood/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Basir-mahmood/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Basir-mahmood/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Basir-mahmood",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi! Yes, you can either use the `gen_kwargs` param in `Dataset.from_generator` (`ds = Dataset.from_generator(gen, gen_kwargs={\"param1\": val})`) or wrap the generator function with `functools.partial`\r\n(`ds = Dataset.from_generator(functools.partial(gen, param1=\"val\"))`) to pass custom parameters to it.\r\n"
] | 2022-09-26T15:20:06
| 2022-10-03T13:00:00
| 2022-10-03T13:00:00
|
NONE
| null | null | null | null |
Big thanks for providing dataset creation via a generator.
I want to ask whether there is any way to pass parameters to the Dataset.from_generator() method, as follows.
```
from datasets import Dataset
def gen(param1):
    for idx in range(len(custom_dataset)):
        yield custom_dataset[idx] + param1
ds = Dataset.from_generator(gen(param1))
```
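As noted in the reply, `gen_kwargs` (or `functools.partial`) is the supported way to do this; a minimal sketch:
```python
from datasets import Dataset

def gen(param1):
    for idx in range(3):
        yield {"value": idx + param1}

ds = Dataset.from_generator(gen, gen_kwargs={"param1": 10})
print(ds["value"])  # [10, 11, 12]
```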
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5028/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5028/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 21:39:54
|
https://api.github.com/repos/huggingface/datasets/issues/5025
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5025/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5025/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5025/events
|
https://github.com/huggingface/datasets/issues/5025
| 1,386,011,239
|
I_kwDODunzps5SnNpn
| 5,025
|
Custom Json Dataset Throwing Error when batch is False
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21245519?v=4",
"events_url": "https://api.github.com/users/jmandivarapu1/events{/privacy}",
"followers_url": "https://api.github.com/users/jmandivarapu1/followers",
"following_url": "https://api.github.com/users/jmandivarapu1/following{/other_user}",
"gists_url": "https://api.github.com/users/jmandivarapu1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmandivarapu1",
"id": 21245519,
"login": "jmandivarapu1",
"node_id": "MDQ6VXNlcjIxMjQ1NTE5",
"organizations_url": "https://api.github.com/users/jmandivarapu1/orgs",
"received_events_url": "https://api.github.com/users/jmandivarapu1/received_events",
"repos_url": "https://api.github.com/users/jmandivarapu1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmandivarapu1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmandivarapu1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmandivarapu1",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessing for each image and text as all my data saved in cloud\r\n #For this reason I couldn't set the batch to True. \r\n encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n # drop extra dim\r\n for k in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n return encoding\r\n```",
"> Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n> \r\n> ```python\r\n> def prepare_examples(examples):\r\n> #Some preporcessing for each image and text as all my data saved in cloud\r\n> #For this reason I couldn't set the batch to True. \r\n> encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n> truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n> # drop extra dim\r\n> for k in encoding.items():\r\n> encoding[k]=encoding[k][0]\r\n> return encoding\r\n> ```\r\n\r\nThank you it did work\r\n\r\n```\r\nfor k,v in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n```"
] | 2022-09-26T12:38:39
| 2022-09-27T19:50:00
| 2022-09-27T19:50:00
|
NONE
| null | null | null | null |
## Describe the bug
I tried to create my custom dataset using below code
```python
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
    #Some preprocessing for each image and text, as all my data is saved in the cloud
    #For this reason I couldn't set batched to True.
encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
truncation=True, padding="max_length")
# encoding['pixel_values']=np.array(encoding['pixel_values'])
return encoding
dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
prepare_examples,
batched=False,
remove_columns=column_names,
features=features
)
```
It throws the error below.
```
/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
172 storage = to_pyarrow_listarray(data, pa_type)
--> 173 return pa.ExtensionArray.from_storage(pa_type, storage)
174
/opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.ExtensionArray.from_storage()
TypeError: Incompatible storage type list<item: list<item: list<item: list<item: float>>>> for extension type extension<arrow.py_extension_type<Array3DExtensionType>>
```
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
    #Some preprocessing for each image and text, as all my data is saved in the cloud
encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
truncation=True, padding="max_length")
# encoding['pixel_values']=np.array(encoding['pixel_values'])
return encoding
dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
prepare_examples,
batched=False,
remove_columns=column_names,
features=features
)
```
## Expected results
A clear and concise description of the expected results.
The expected result would be similar to all the other datasets, with no error.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Unix
- Python version: 3.9
- PyArrow version: 9.0.0
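For reference, below is a minimal sketch of the per-example fix suggested in the comments of this issue. It assumes the processor call and its inputs (`processor`, `img_as_tensor`, `words`, `boxes`, `labels`) are set up exactly as in the issue author's own (not fully shown) preprocessing, and that `return_tensors="np"` is passed so every value carries a leading batch axis of size 1 that can simply be dropped when `batched=False`:
```python
def prepare_examples(example):
    # `processor`, `img_as_tensor`, `words`, `boxes` and `labels` come from the
    # issue author's setup and are assumed to be defined elsewhere.
    encoding = processor(
        img_as_tensor, words, boxes=boxes, word_labels=labels,
        truncation=True, padding="max_length", return_tensors="np",
    )
    # The processor returns batched arrays even for a single example,
    # so drop the leading batch dimension of size 1 from every field.
    return {key: value[0] for key, value in encoding.items()}
```
With the extra dimension removed, the per-example shapes line up with the `Array3D`/`Array2D` features declared above, which is what the cast in `arrow_writer.py` expects (the error complained about a 4-level nested list being cast to a 3D extension type).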
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21245519?v=4",
"events_url": "https://api.github.com/users/jmandivarapu1/events{/privacy}",
"followers_url": "https://api.github.com/users/jmandivarapu1/followers",
"following_url": "https://api.github.com/users/jmandivarapu1/following{/other_user}",
"gists_url": "https://api.github.com/users/jmandivarapu1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmandivarapu1",
"id": 21245519,
"login": "jmandivarapu1",
"node_id": "MDQ6VXNlcjIxMjQ1NTE5",
"organizations_url": "https://api.github.com/users/jmandivarapu1/orgs",
"received_events_url": "https://api.github.com/users/jmandivarapu1/received_events",
"repos_url": "https://api.github.com/users/jmandivarapu1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmandivarapu1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmandivarapu1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmandivarapu1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5025/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5025/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 7:11:21
|
https://api.github.com/repos/huggingface/datasets/issues/5023
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5023/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5023/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5023/events
|
https://github.com/huggingface/datasets/issues/5023
| 1,385,881,112
|
I_kwDODunzps5Smt4Y
| 5,023
|
Text strings are split into lists of characters in xcsr dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2022-09-26T11:11:50
| 2022-09-28T07:54:20
| 2022-09-28T07:54:20
|
MEMBER
| null | null | null | null |
## Describe the bug
Text strings are split into lists of characters.
Example for "X-CSQA-en":
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': ['T',
'h',
'e',
' ',
'd',
'e',
'n',
't',
'a',
'l',
' ',
'o',
'f',
'f',
'i',
'c',
'e',
' ',
'h',
'a',
'n',
'd',
'l',
'e',
'd',
' ',
'a',
' ',
'l',
'o',
't',
' ',
'o',
'f',
' ',
'p',
'a',
't',
'i',
'e',
'n',
't',
's',
' ',
'w',
'h',
'o',
' ',
'e',
'x',
'p',
'e',
'r',
'i',
'e',
'n',
'c',
'e',
'd',
' ',
't',
'r',
'a',
'u',
'm',
'a',
't',
'i',
'c',
' ',
'm',
'o',
'u',
't',
'h',
' ',
'i',
'n',
'j',
'u',
'r',
'y',
',',
' ',
'w',
'h',
'e',
'r',
'e',
' ',
'w',
'e',
'r',
'e',
' ',
't',
'h',
'e',
's',
'e',
' ',
'p',
'a',
't',
'i',
'e',
'n',
't',
's',
' ',
'c',
'o',
'm',
'i',
'n',
'g',
' ',
'f',
'r',
'o',
'm',
'?'],
'choices': [{'label': ['A'], 'text': ['t', 'o', 'w', 'n']},
{'label': ['B'], 'text': ['m', 'i', 'c', 'h', 'i', 'g', 'a', 'n']},
{'label': ['C'], 'text': ['h', 'o', 's', 'p', 'i', 't', 'a', 'l']},
{'label': ['D'], 'text': ['s', 'c', 'h', 'o', 'o', 'l', 's']},
{'label': ['E'],
'text': ['o',
'f',
'f',
'i',
'c',
'e',
' ',
'b',
'u',
'i',
'l',
'd',
'i',
'n',
'g']}]},
'answerKey': 'C'}
```
## Steps to reproduce the bug
```python
ds = load_dataset("datasets/xcsr", "X-CSQA-en", split="validation", streaming=True)
item = next(iter(ds))
item
```
## Expected results
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': 'The dental office handled a lot of patients who experienced traumatic mouth injury, where were these patients coming from?',
'choices': {'label': ['A', 'B', 'C', 'D', 'E'],
'text': ['town', 'michigan', 'hospital', 'schools', 'office building']}},
'answerKey': 'C'}
```
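As a stop-gap on the consumer side (not the actual fix to the loading script), here is a hedged sketch of how the character lists could be re-joined into strings. The nested schema is assumed from the sample record shown above, and the result keeps the original nesting rather than the columnar `choices` layout of the expected output:
```python
from datasets import load_dataset

def join_chars(example):
    # Re-join every character list back into a plain string.
    question = example["question"]
    example["question"] = {
        "stem": "".join(question["stem"]),
        "choices": [
            {"label": "".join(choice["label"]), "text": "".join(choice["text"])}
            for choice in question["choices"]
        ],
    }
    return example

ds = load_dataset("xcsr", "X-CSQA-en", split="validation", streaming=True)
ds = ds.map(join_chars)
print(next(iter(ds))["question"]["stem"])
```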
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5023/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5023/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 20:42:30
|
https://api.github.com/repos/huggingface/datasets/issues/5021
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5021/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5021/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5021/events
|
https://github.com/huggingface/datasets/issues/5021
| 1,385,351,250
|
I_kwDODunzps5SkshS
| 5,021
|
Split is inferred from filename and overrides metadata.jsonl
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/102226344?v=4",
"events_url": "https://api.github.com/users/float-trip/events{/privacy}",
"followers_url": "https://api.github.com/users/float-trip/followers",
"following_url": "https://api.github.com/users/float-trip/following{/other_user}",
"gists_url": "https://api.github.com/users/float-trip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/float-trip",
"id": 102226344,
"login": "float-trip",
"node_id": "U_kgDOBhfZqA",
"organizations_url": "https://api.github.com/users/float-trip/orgs",
"received_events_url": "https://api.github.com/users/float-trip/received_events",
"repos_url": "https://api.github.com/users/float-trip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/float-trip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/float-trip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/float-trip",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] |
closed
| false
| null |
[] |
[
"Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files=\"dataset/**\")\r\n```",
"Thanks! Specifying `data_files` worked for that case.\r\n\r\nI'm new to the library, so let me try rephrasing the issue. If there's no actual bug here, sorry for the trouble.\r\n\r\nI've uploaded an example [here](https://files.catbox.moe/nfj2pd.zip) with the following files: \r\n\r\n```\r\n.\r\nβββ bug.py\r\nβββ imagefolder\r\n βββ test\r\n β βββ metadata.jsonl\r\n β βββ dog.jpg\r\n β βββ personal trainer.jpg\r\n βββ train\r\n βββ metadata.jsonl\r\n βββ cat.jpg\r\n βββ testing center.jpg\r\n```\r\n\r\n`bug.py`\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\")\r\n\r\nprint(dataset)\r\n# DatasetDict({\r\n# test: Dataset({\r\n# features: ['image', 'text'],\r\n# num_rows: 1\r\n# })\r\n# })\r\n\r\nfor split in dataset:\r\n print(\"Split:\", split)\r\n for n in dataset[split]:\r\n print(n['text'])\r\n\r\n\r\n# Split: test\r\n# testing center\r\n```\r\n\r\nAs far as I can tell, this conforms with the example given here: https://huggingface.co/docs/datasets/image_dataset#imagefolder. It appears to me that, even though `metadata.jsonl` is present, the inferred labels from the path are taking precedent. Does this sound like a bug/undocumented behavior?",
"This looks like a duplicate of https://github.com/huggingface/datasets/issues/4895 (the problem is explained in this comment: https://github.com/huggingface/datasets/issues/4895#issuecomment-1248269550).\r\n\r\nIn the meantime, you can do the following to fetch all the splits:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files={\"train\": \"imagefolder/train/**\", \"test\": \"imagefolder/test/**\"})\r\n```\r\n"
] | 2022-09-26T03:22:14
| 2022-09-29T08:07:50
| 2022-09-29T08:07:50
|
NONE
| null | null | null | null |
## Describe the bug
Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files.
This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder
## Steps to reproduce the bug
`metadata.jsonl`
```json
{"file_name": "photo of a cat.jpg", "text": "a photo of a cat"}
{"file_name": "photo of a dog.jpg", "text": "a photo of a dog"}
{"file_name": "photo of a train.jpg", "text": "a photo of a train"}
{"file_name": "photo of test tubes.jpg", "text": "a photo of test tubes"}
```
`bug.py`
```python
from datasets import load_dataset
dataset = load_dataset("dataset")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# test: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# })
for split in dataset:
for n in dataset[split]:
print(n['text'])
# a photo of a train
# a photo of test tubes
```
## Expected results
One single dataset with all four images / a warning for unused files / documentation of this behavior
## Actual results
Only the images with "test" or "train" in the name are loaded
## Environment info
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
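For completeness, a brief sketch of the workaround mentioned in the comments: passing `data_files` explicitly bypasses the filename-based split inference, so files such as "photo of a train.jpg" are no longer routed into a `train` split on their own. It assumes the four images and `metadata.jsonl` sit directly under `dataset/`:
```python
from datasets import load_dataset

# Explicit data_files disable split inference from file names; everything
# under dataset/ is loaded into a single "train" split alongside metadata.jsonl.
dataset = load_dataset("imagefolder", data_files={"train": "dataset/**"})
print(dataset["train"].num_rows)  # expected: 4
```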
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5021/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5021/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 4:45:36
|
https://api.github.com/repos/huggingface/datasets/issues/5017
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5017/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5017/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5017/events
|
https://github.com/huggingface/datasets/issues/5017
| 1,384,022,463
|
I_kwDODunzps5SfoG_
| 5,017
|
xcsr: X-CSQA simply uses english for all alleged non-english data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26286291?v=4",
"events_url": "https://api.github.com/users/thesofakillers/events{/privacy}",
"followers_url": "https://api.github.com/users/thesofakillers/followers",
"following_url": "https://api.github.com/users/thesofakillers/following{/other_user}",
"gists_url": "https://api.github.com/users/thesofakillers/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thesofakillers",
"id": 26286291,
"login": "thesofakillers",
"node_id": "MDQ6VXNlcjI2Mjg2Mjkx",
"organizations_url": "https://api.github.com/users/thesofakillers/orgs",
"received_events_url": "https://api.github.com/users/thesofakillers/received_events",
"repos_url": "https://api.github.com/users/thesofakillers/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thesofakillers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thesofakillers/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thesofakillers",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thanks for reporting, @thesofakillers. Good catch. We are fixing this. "
] | 2022-09-23T16:11:54
| 2022-09-26T10:57:31
| 2022-09-26T10:57:31
|
NONE
| null | null | null | null |
## Describe the bug
All the alleged non-English subcollections for the X-CSQA task in the [xcsr benchmark dataset](https://huggingface.co/datasets/xcsr) seem to be copies of the English subcollection, rather than translations. This is in contrast to the data description:
> we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR
## Steps to reproduce the bug
```python
# let's say you want to load the french X-CSQA subcollection
french = datasets.load_dataset("xcsr", "X-CSQA-fr")
# for good measure, let's load english too
english = datasets.load_dataset("xcsr", "X-CSQA-en")
# let's inspect
"".join(english['test'][0]['question']['stem'])
# output: 'The people wanted to stop the parade, so what did they set up to thwart it?'
"".join(french['test'][0]['question']['stem'])
# output: 'The people wanted to stop the parade, so what did they set up to thwart it?'
# what? Why are they both in english?
# I've checked this for validation and train splits too, across many datapoints. It's all the same english dataset
# maybe i need to look better?
french['test'].unique('lang')
# output: ['en']
# no, it's all english
```
## Expected results
Accessing a subcollection in language X should return a subcollection containing samples in language X.
## Actual results
Accessing a subcollection in language X returns a subcollection containing samples in English.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
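A small verification sketch that makes the extent of the problem explicit across several alleged translations; the config names follow the `X-CSQA-<lang>` pattern used above, and only a handful of languages are probed here:
```python
import datasets

# Probe a few of the supposedly translated configs and report which language
# codes actually occur in their examples.
for lang in ["fr", "de", "es", "zh"]:
    ds = datasets.load_dataset("xcsr", f"X-CSQA-{lang}", split="test")
    print(lang, ds.unique("lang"))  # per the report above, each prints ['en']
```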
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5017/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5017/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 18:45:37
|
https://api.github.com/repos/huggingface/datasets/issues/5015
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5015/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5015/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5015/events
|
https://github.com/huggingface/datasets/issues/5015
| 1,383,485,558
|
I_kwDODunzps5SdlB2
| 5,015
|
Transfer dataset scripts to Hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Sounds good ! Can I help with anything ?"
] | 2022-09-23T08:48:10
| 2022-10-05T07:15:57
| 2022-10-05T07:15:57
|
MEMBER
| null | null | null | null |
Before merging:
- #4974
TODO:
- [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22)
- [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/)
- [x] PRs:
- [x] Add dataset: we should recommend transferring all new dataset additions to the Hub, under the appropriate namespace; no more additions of datasets on GitHub
- [x] Update dataset: in general, we should merge bug fixes; enhancements should be considered on a case-by-case basis, depending on whether there is a more suitable namespace on the Hub
- [ ] Issues
Finally:
- [x] #4974
Let me know what you think! :hugs:
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5015/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5015/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 11 days, 22:27:47
|
https://api.github.com/repos/huggingface/datasets/issues/5014
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5014/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5014/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5014/events
|
https://github.com/huggingface/datasets/issues/5014
| 1,383,422,639
|
I_kwDODunzps5SdVqv
| 5,014
|
I need to read the custom dataset in conll format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4",
"events_url": "https://api.github.com/users/shell-nlp/events{/privacy}",
"followers_url": "https://api.github.com/users/shell-nlp/followers",
"following_url": "https://api.github.com/users/shell-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/shell-nlp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shell-nlp",
"id": 39985245,
"login": "shell-nlp",
"node_id": "MDQ6VXNlcjM5OTg1MjQ1",
"organizations_url": "https://api.github.com/users/shell-nlp/orgs",
"received_events_url": "https://api.github.com/users/shell-nlp/received_events",
"repos_url": "https://api.github.com/users/shell-nlp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shell-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shell-nlp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shell-nlp",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r\n\r\nIn the meantime, you can use `Dataset.from_generator` to create a dataset as follows:\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# 2009 version\r\nINPUT_COLUMNS = \"ID FORM LEMMA PLEMMA POS PPOS FEAT PFEAT HEAD PHEAD DEPREL PDEPREL\".split()\r\n\r\ndef read_conll(file):\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n idx = 0\r\n with open(file) as f:\r\n for line in f:\r\n if line.startswith(\"-DOCSTART-\") or line == \"\\n\" or not line:\r\n if example[next(iter(example))]:\r\n yield idx, example\r\n idx += 1\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n else:\r\n row_cols = line.split()\r\n for i, col in enumerate(example):\r\n example[col] = row_cols[i].rstrip()\r\n\r\n# (optional) pass custom features with `features=Features(...)`\r\ndset = Dataset.from_generator(read_conll, gen_kwargs={\"file\": \"path/to/conll/file\"}) \r\n``` ",
"I think we could add a dedicated builder if you think this format is general enough.",
"\r\n\r\n\r\n> I think we could add a dedicated builder if you think this format is general enough.\r\n\r\nI think its functions are incomplete. It should have to_ Conll and from_ There are two methods of conll."
] | 2022-09-23T07:49:42
| 2022-11-02T11:57:15
| null |
NONE
| null | null | null | null |
I need to read the custom dataset in conll format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4",
"events_url": "https://api.github.com/users/shell-nlp/events{/privacy}",
"followers_url": "https://api.github.com/users/shell-nlp/followers",
"following_url": "https://api.github.com/users/shell-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/shell-nlp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shell-nlp",
"id": 39985245,
"login": "shell-nlp",
"node_id": "MDQ6VXNlcjM5OTg1MjQ1",
"organizations_url": "https://api.github.com/users/shell-nlp/orgs",
"received_events_url": "https://api.github.com/users/shell-nlp/received_events",
"repos_url": "https://api.github.com/users/shell-nlp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shell-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shell-nlp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shell-nlp",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5014/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5014/timeline
| null |
reopened
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5013
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5013/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5013/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5013/events
|
https://github.com/huggingface/datasets/issues/5013
| 1,383,415,971
|
I_kwDODunzps5SdUCj
| 5,013
|
would huggingface like publish cpp binding for datasets package ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6143404?v=4",
"events_url": "https://api.github.com/users/mullerhai/events{/privacy}",
"followers_url": "https://api.github.com/users/mullerhai/followers",
"following_url": "https://api.github.com/users/mullerhai/following{/other_user}",
"gists_url": "https://api.github.com/users/mullerhai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mullerhai",
"id": 6143404,
"login": "mullerhai",
"node_id": "MDQ6VXNlcjYxNDM0MDQ=",
"organizations_url": "https://api.github.com/users/mullerhai/orgs",
"received_events_url": "https://api.github.com/users/mullerhai/received_events",
"repos_url": "https://api.github.com/users/mullerhai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mullerhai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mullerhai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mullerhai",
"user_view_type": "public"
}
|
[
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] |
closed
| false
| null |
[] |
[
"Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?",
"> Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?\r\n\r\nfor example ,the huggingface load_model() and load_dataset() can execute in cpp env",
"If it's a viable option for you, you can check [tch-rs](https://github.com/LaurentMazare/tch-rs) to load models in Rust. Regarding datasets, you can first download them in python and then use Arrow C++ or Rust to load them",
"If you are more adventurous, another option is to embed python calls inside c++ e.g. with `pybind11`.",
"> pybind11\r\n\r\nI think it is not the best solution"
] | 2022-09-23T07:42:49
| 2023-02-24T16:20:57
| 2023-02-24T16:20:57
|
NONE
| null | null | null | null |
Hi:
I work in a C++ environment with libtorch and I would like to use huggingface, but huggingface has no C++ bindings. Would you consider publishing C++ bindings for it?
Thanks
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5013/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5013/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 154 days, 8:38:08
|
https://api.github.com/repos/huggingface/datasets/issues/5012
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5012/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5012/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5012/events
|
https://github.com/huggingface/datasets/issues/5012
| 1,382,851,096
|
I_kwDODunzps5SbKIY
| 5,012
|
Force JSON format regardless of file naming on S3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4",
"events_url": "https://api.github.com/users/junwang-wish/events{/privacy}",
"followers_url": "https://api.github.com/users/junwang-wish/followers",
"following_url": "https://api.github.com/users/junwang-wish/following{/other_user}",
"gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/junwang-wish",
"id": 112650299,
"login": "junwang-wish",
"node_id": "U_kgDOBrboOw",
"organizations_url": "https://api.github.com/users/junwang-wish/orgs",
"received_events_url": "https://api.github.com/users/junwang-wish/received_events",
"repos_url": "https://api.github.com/users/junwang-wish/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions",
"type": "User",
"url": "https://api.github.com/users/junwang-wish",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi ! Support for URIs like `s3://...` is not implemented yet in `data_files=`. You can use the HTTP URL instead if your data is public in the meantime",
"Hi,\r\nI want to make sure I understand this response. I have a set of files on S3 that are private for security reasons. Because they are not public files I cannot read those files (many are parquet) into my hf notebooks in Kaggle? That can't be correct, can it? ",
"Hi ! There is a discussion at https://github.com/huggingface/datasets/issues/5281\r\n\r\nUsing the latest `datasets` 2.11 you can try passing fsspec URLs to private buckets to `data_files` in `load_dataset()`. Though this is still experimental and undocumented, so feedback is welcome. You may not have the best experience though, since anything related to performance and caching hasn't been tested properly yet.",
"closing this one since data_files supports fsspec (still experimental/untested/undocumented for s3 though)"
] | 2022-09-22T18:28:15
| 2023-08-16T09:58:36
| 2023-08-16T09:58:36
|
NONE
| null | null | null | null |
I have a file on S3 created by Data Version Control; it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a JSON file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
```
However, I cannot go ahead and change the name of the S3 file. Is there a way to "force" loading an S3 URL with a certain decoder (JSON, CSV, etc.) regardless of the S3 URL naming?
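One hedged workaround, given that `s3://` URLs were not yet supported in `data_files` at the time: fetch the extensionless DVC object with `s3fs` to a local file whose name ends in `.json`, then point the `json` builder at it. The local filename and the credential handling here are illustrative assumptions, not part of the original report:
```python
import s3fs
from datasets import load_dataset

# Credentials are picked up from the environment / AWS config as usual.
fs = s3fs.S3FileSystem()

# Copy the DVC-managed object to a local path with an explicit .json suffix.
fs.get("dvc/ac/badff5b134382a0f25248f1b45d7b2", "dvc_cache_file.json")

dataset = load_dataset("json", data_files="dvc_cache_file.json")
```
Per the later comments, recent `datasets` releases also accept fsspec URLs (including private S3 buckets) directly in `data_files`, though that path was still flagged as experimental.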
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5012/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5012/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 327 days, 15:30:21
|
https://api.github.com/repos/huggingface/datasets/issues/5011
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5011/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5011/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5011/events
|
https://github.com/huggingface/datasets/issues/5011
| 1,382,609,587
|
I_kwDODunzps5SaPKz
| 5,011
|
Audio: `encode_example` fails with IndexError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Sorry bug on my part π
Closing "
] | 2022-09-22T15:07:27
| 2022-09-23T09:05:18
| 2022-09-23T09:05:18
|
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an IndexError. I created this dataset locally and then pushed it to the Hub at the specified URL, so I expect the dataset to work out-of-the-box. Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally.
I don't think it's a soundfile bug, as the version matches what worked previously.
Update: the bug appeared for me on a GPU machine; mysteriously, on a TPU I can't reproduce it and the dataset downloads correctly...
## Steps to reproduce the bug
```python
from datasets import load_dataset
earnings22 = load_dataset("sanchit-gandhi/earnings22_split")
```
## Expected results
```
>>> earnings22
DatasetDict({
validation: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2650
})
train: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 52006
})
test: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2735
})
})
```
## Actual results
```
Traceback (most recent call last):
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single
writer.write(example)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write
self.write_examples_on_file()
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 231, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature
return feature.cast_storage(array)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp>
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write
channels = data.shape[1]
IndexError: tuple index out of range
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
Plus:
- SoundFile version: 0.10.3.post1
cc @lhoestq @polinaeterna
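For what it's worth, the exact `IndexError` in the traceback can be reproduced in isolation: in soundfile 0.10.x, `write()` only special-cases 1-D arrays, so any array without a usable second axis (for instance a 0-D scalar) hits `channels = data.shape[1]`. This is a probing sketch, not the resolution of the issue, which the author later attributed to their own setup:
```python
import io

import numpy as np
import soundfile as sf

buffer = io.BytesIO()
mono = np.zeros(16000, dtype=np.float32)
sf.write(buffer, mono, 16000, format="wav")  # 1-D mono audio is accepted

try:
    # A 0-D array is not valid audio data and reproduces the failure mode.
    sf.write(io.BytesIO(), np.float32(0.0), 16000, format="wav")
except IndexError as err:
    print("IndexError:", err)  # tuple index out of range, as in the traceback
```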
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5011/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5011/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17:57:51
|
https://api.github.com/repos/huggingface/datasets/issues/5009
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5009/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5009/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5009/events
|
https://github.com/huggingface/datasets/issues/5009
| 1,381,194,067
|
I_kwDODunzps5SU1lT
| 5,009
|
Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4996184?v=4",
"events_url": "https://api.github.com/users/ykl7/events{/privacy}",
"followers_url": "https://api.github.com/users/ykl7/followers",
"following_url": "https://api.github.com/users/ykl7/following{/other_user}",
"gists_url": "https://api.github.com/users/ykl7/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ykl7",
"id": 4996184,
"login": "ykl7",
"node_id": "MDQ6VXNlcjQ5OTYxODQ=",
"organizations_url": "https://api.github.com/users/ykl7/orgs",
"received_events_url": "https://api.github.com/users/ykl7/received_events",
"repos_url": "https://api.github.com/users/ykl7/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ykl7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykl7/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ykl7",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"I think this is because some columns are mostly empty lists. In particular the train and validation splits only have empty lists for `val_ann`. Therefore the type inference doesn't know which type is inside (or it would have to scan the other splits first before knowing).\r\n\r\nYou can fix that by specifying the features types explicitly.\r\nThen you can save the feature types inside the dataset repository, so that you won't need to specify the features in subsequent calls:\r\n```python\r\nfrom datasets import load_dataset, Features, Sequence, Value\r\nfrom datasets.info import DatasetInfosDict\r\n\r\nfeatures = Features({\r\n 'narrative': Value('string'),\r\n 'question': Value('string'),\r\n 'original_sentence_for_question': Value('string'),\r\n 'narrative_lexical_overlap': Value('float64'),\r\n 'is_ques_answerable': Value('string'),\r\n 'answer': Value('string'),\r\n 'is_ques_answerable_annotator': Value('string'),\r\n 'original_narrative_form': Sequence(Value('string')),\r\n 'question_meta': Value('string'),\r\n 'helpful_sentences': Sequence(Value('int64')),\r\n 'human_eval': Value('bool'),\r\n 'val_ann': Sequence(Value('int64')),\r\n 'gram_ann': Sequence(Value('int64'))\r\n})\r\nds = load_dataset('StonyBrookNLP/tellmewhy', features=features)\r\nDatasetInfosDict({\"default\": ds[\"train\"].info}).write_to_directory(\"path/to/local/tellmewhy\")\r\n```\r\nand then after pushing the change to the dataset repository on the Hub, `load_dataset(\"StonyBrookNLP/tellmewhy\")` will work directly`",
"(Note that specifying explicit types will be made easier with https://github.com/huggingface/datasets/pull/4926)",
"`gram_ann` and `val_ann` are annotations that only exist for part of the test set. I wanted to keep all the columns consistent across all files, so I added them to train and validation as well. I'll check if removing them from those files is still compliant with this repo. Otherwise, I will do as you suggested. Thanks @lhoestq !",
"@lhoestq I followed the exact steps you described but it seems like I'm getting the same error unfortunately. Any other ideas? Thanks in advance",
"Hi ! If you move `dataset_infos.json` from `data/` to the root of your dataset repository if should work :)",
"I tried that and pushed to the [hub](https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/tree/main). Now, there is a new error.\r\n```\r\n File \"/home/yklal95/tellmewhy/src/prepare_data.py\", line 67, in main\r\n dataset = load_dataset('StonyBrookNLP/tellmewhy')\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py\", line 1746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 775, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 33, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'/home/yklal95/tellmewhy/data/test.json', '/home/yklal95/tellmewhy/data/validation.json', '/home/yklal95/tellmewhy/data/train.json'}\r\n```\r\nNo changes were made to any of the other files and they are still on the hub. Let me know if you have any ideas @lhoestq Thanks!",
"Oh I see - the code I gave you returns local paths instead of URLs to store metadata about files to download.\r\nI opened a PR in your repo here to remove this: https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/discussions/1\r\nsorry for the inconvenience !",
"It works now! Thanks a lot @lhoestq "
] | 2022-09-21T16:23:06
| 2022-09-29T13:07:29
| 2022-09-29T13:07:29
|
NONE
| null | null | null | null |
## Describe the bug
I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files from my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, the dataset loads correctly. However, when I try to load it from the hub, I get an error (pasted below). Additionally, `dataset = datasets.load_dataset("json", data_dir="data/")` throws the same error.
## Steps to reproduce the bug
```python
dataset = datasets.load_dataset('StonyBrookNLP/tellmewhy')
```
## Expected results
Successfully load the `StonyBrookNLP/tellmewhy` dataset.
## Actual results
```
Using custom data configuration StonyBrookNLP--tellmewhy-82712924092694ff
Downloading and preparing dataset json/StonyBrookNLP--tellmewhy to /home/yklal95/.cache/huggingface/datasets/StonyBrookNLP___json/StonyBrookNLP--tellmewhy-82712924092694ff/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253...
Downloading data files: 100%|██████████████████████████████| 3/3 [00:00<00:00, 957.46it/s]
Extracting data files: 100%|███████████████████████████████| 3/3 [00:00<00:00, 299.14it/s]
Traceback (most recent call last):
File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 17, in <module>
main(args)
File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 11, in main
dataset = datasets.load_dataset(args.dataset_name)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1822, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1853, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1761, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type int64 to null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-121-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
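The cast error above appears to come from columns such as `val_ann` and `gram_ann`, which only contain empty lists in the train and validation files, so their element type is inferred as `null`. A minimal sketch of the workaround discussed in the comments for this issue, passing an explicit schema (the column types are taken from that thread and may need adjusting):
```python
from datasets import Features, Sequence, Value, load_dataset

# Explicit schema so that columns that are empty lists in some splits are
# typed as Sequence(int64) instead of being inferred as null.
features = Features(
    {
        "narrative": Value("string"),
        "question": Value("string"),
        "original_sentence_for_question": Value("string"),
        "narrative_lexical_overlap": Value("float64"),
        "is_ques_answerable": Value("string"),
        "answer": Value("string"),
        "is_ques_answerable_annotator": Value("string"),
        "original_narrative_form": Sequence(Value("string")),
        "question_meta": Value("string"),
        "helpful_sentences": Sequence(Value("int64")),
        "human_eval": Value("bool"),
        "val_ann": Sequence(Value("int64")),
        "gram_ann": Sequence(Value("int64")),
    }
)

dataset = load_dataset("StonyBrookNLP/tellmewhy", features=features)
```
Writing the resolved dataset info back to the repository, as described in the comment thread, then lets a plain `load_dataset("StonyBrookNLP/tellmewhy")` call work without repeating the schema.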
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4996184?v=4",
"events_url": "https://api.github.com/users/ykl7/events{/privacy}",
"followers_url": "https://api.github.com/users/ykl7/followers",
"following_url": "https://api.github.com/users/ykl7/following{/other_user}",
"gists_url": "https://api.github.com/users/ykl7/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ykl7",
"id": 4996184,
"login": "ykl7",
"node_id": "MDQ6VXNlcjQ5OTYxODQ=",
"organizations_url": "https://api.github.com/users/ykl7/orgs",
"received_events_url": "https://api.github.com/users/ykl7/received_events",
"repos_url": "https://api.github.com/users/ykl7/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ykl7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykl7/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ykl7",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5009/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5009/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7 days, 20:44:23
|
https://api.github.com/repos/huggingface/datasets/issues/5005
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5005/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5005/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5005/events
|
https://github.com/huggingface/datasets/issues/5005
| 1,380,952,960
|
I_kwDODunzps5ST6uA
| 5,005
|
Release 2.5.0 breaks transformers CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later"
] | 2022-09-21T13:39:19
| 2022-09-21T14:11:57
| 2022-09-21T14:11:57
|
MEMBER
| null | null | null | null |
## Describe the bug
As reported by @lhoestq:
> see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563
this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[β¦]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5005/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5005/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:32:38
|
https://api.github.com/repos/huggingface/datasets/issues/5002
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5002/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5002/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5002/events
|
https://github.com/huggingface/datasets/issues/5002
| 1,380,589,402
|
I_kwDODunzps5SSh9a
| 5,002
|
Dataset Viewer issue for loubnabnl/humaneval-x
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
] |
[
"It's a bug! Thanks for reporting, I'm looking at it",
"Fixed."
] | 2022-09-21T09:06:17
| 2022-09-21T11:49:49
| 2022-09-21T11:49:49
|
NONE
| null | null | null | null |
### Link
https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/
### Description
The dataset has subsets, but the viewer gets stuck on the default subset even when I select another one (the data loading of the subsets themselves works fine).
### Owner
Yes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5002/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5002/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:43:32
|
https://api.github.com/repos/huggingface/datasets/issues/5000
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5000/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5000/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5000/events
|
https://github.com/huggingface/datasets/issues/5000
| 1,379,709,398
|
I_kwDODunzps5SPLHW
| 5,000
|
Dataset Viewer issue for asapp/slue
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56092571?v=4",
"events_url": "https://api.github.com/users/fwu-asapp/events{/privacy}",
"followers_url": "https://api.github.com/users/fwu-asapp/followers",
"following_url": "https://api.github.com/users/fwu-asapp/following{/other_user}",
"gists_url": "https://api.github.com/users/fwu-asapp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fwu-asapp",
"id": 56092571,
"login": "fwu-asapp",
"node_id": "MDQ6VXNlcjU2MDkyNTcx",
"organizations_url": "https://api.github.com/users/fwu-asapp/orgs",
"received_events_url": "https://api.github.com/users/fwu-asapp/received_events",
"repos_url": "https://api.github.com/users/fwu-asapp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fwu-asapp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fwu-asapp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fwu-asapp",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"<img width=\"519\" alt=\"Capture dβeΜcran 2022-09-20 aΜ 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=asapp/slue\r\n\r\n```json\r\n{\"error\":\"Not found.\"}\r\n```",
"I just launched a refresh. It's weird, I don't see any entry for this dataset in the cache, it's a bug on our side. In order to try to understand what happened, did you change the visibility status from private to public, by any chance?",
"The dataset is being refreshed, please retry later.\r\n\r\n<img width=\"802\" alt=\"Capture dβeΜcran 2022-09-20 aΜ 22 39 46\" src=\"https://user-images.githubusercontent.com/1676121/191360072-7cc86486-4e84-4b47-8f9a-4a69fe84a5ac.png\">\r\n",
"OK. We now have an issue because the dataset cannot be streamed, and the dataset viewer relies on it.\r\n\r\nMaybe @huggingface/datasets can help:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 337, in get_first_rows_response\r\n rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)\r\n File \"/src/services/worker/src/worker/utils.py\", line 123, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 65, in get_rows\r\n ds = load_dataset(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1739, in load_dataset\r\n return builder_instance.as_streaming_dataset(split=split)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1025, in as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/tmp/modules-cache/datasets_modules/datasets/asapp--slue/adaa0c78233e1a1df9c2f054e690ec5fc3eaf453bd76b80fe5cbe5728e55d9b1/slue.py\", line 189, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DL_URLS[config_name])\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 944, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 907, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 912, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 390, in _get_extraction_protocol\r\n raise NotImplementedError(\r\n NotImplementedError: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```",
"Thanks @severo, \r\n\r\nDo I have to modify the python script to support streaming so that it can be previewed?\r\nIs there a document somewhere that I can follow?\r\n",
"Hi @fwu-asapp thanks for reporting, and thanks @severo for the investigation.\r\n\r\nAs explained by @severo, the preview requires that your dataset loading script supports streaming.\r\n\r\nThere are several options here:\r\n- the easiest would be to replace the source files, archived using ZIP instead TAR: the TAR format does not allow random access while streaming, but only sequential access; the ZIP files support streaming out of the box.\r\n- alternatively, to stream TAR archives you can use `dl_manager.iter_archive`: the only prerequisite is that your \"index\" files (.tsv) should have been archived before their corresponding audio files, so while iterating the content of the TAR archive, the metadata files appear first. I think this is the case for voxpopuli tar but not for voxceleb.\r\n- if your .tsv files were not archived before their corresponding audio files (I think this is the case for voxceleb), then you should extract the .tsv files and host them separately (you can host them on the same Hugging Face Hub).\r\n - you can take as example, e.g.: https://huggingface.co/datasets/vivos/blob/main/vivos.py\r\n\r\nAs an advanced approach, you can handle both streaming and non-streaming cases separately.\r\n- as for example: https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py or https://huggingface.co/datasets/google/fleurs/blob/main/fleurs.py\r\n\r\nSee related discussion:\r\n- https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492",
"Thanks @albertvillanova for your clarification. I'll talk to my collaborators to see if we can replace those files. Let me just close this issue for now.",
"FYI, after replacing the source files with the ZIP ones, the dataset viewer works well. Thanks again to @severo and @albertvillanova for your help!",
"Great! And thank you for sharing that interesting dataset!"
] | 2022-09-20T16:45:45
| 2022-09-27T07:04:03
| 2022-09-21T07:24:07
|
NONE
| null | null | null | null |
### Link
https://huggingface.co/datasets/asapp/slue/viewer/
### Description
Hi,
I wonder how to get the dataset viewer of our slue dataset to work.
Best,
Felix
### Owner
Yes
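For reference, a rough sketch of the `dl_manager.iter_archive` pattern mentioned in the comments, which is what streaming mode (and therefore the viewer) needs for TAR archives. The URL, file layout and features below are illustrative rather than the actual SLUE loading script, and they assume the `.tsv` metadata is stored before the audio files inside the archive:
```python
import csv
import io

import datasets

_DL_URL = "https://example.com/slue-like-corpus.tar.gz"  # illustrative URL


class SlueLikeCorpus(datasets.GeneratorBasedBuilder):
    """Toy builder showing the streaming-friendly iter_archive pattern."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "audio_id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "audio": datasets.Audio(),
                }
            )
        )

    def _split_generators(self, dl_manager):
        # In streaming mode no extraction happens: iter_archive reads the
        # TAR sequentially and yields (member_path, file_object) pairs.
        archive = dl_manager.download(_DL_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        metadata = {}
        for path, f in files:
            if path.endswith(".tsv"):
                # The metadata must come before the audio members in the TAR,
                # otherwise a single sequential pass cannot pair them up.
                reader = csv.DictReader(io.TextIOWrapper(f, encoding="utf-8"), delimiter="\t")
                for row in reader:
                    metadata[row["audio_id"]] = row["text"]
            elif path.endswith(".ogg"):
                audio_id = path.split("/")[-1][: -len(".ogg")]
                if audio_id in metadata:
                    yield audio_id, {
                        "audio_id": audio_id,
                        "text": metadata[audio_id],
                        "audio": {"path": path, "bytes": f.read()},
                    }
```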
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56092571?v=4",
"events_url": "https://api.github.com/users/fwu-asapp/events{/privacy}",
"followers_url": "https://api.github.com/users/fwu-asapp/followers",
"following_url": "https://api.github.com/users/fwu-asapp/following{/other_user}",
"gists_url": "https://api.github.com/users/fwu-asapp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fwu-asapp",
"id": 56092571,
"login": "fwu-asapp",
"node_id": "MDQ6VXNlcjU2MDkyNTcx",
"organizations_url": "https://api.github.com/users/fwu-asapp/orgs",
"received_events_url": "https://api.github.com/users/fwu-asapp/received_events",
"repos_url": "https://api.github.com/users/fwu-asapp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fwu-asapp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fwu-asapp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fwu-asapp",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5000/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5000/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 14:38:22
|
https://api.github.com/repos/huggingface/datasets/issues/4996
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4996/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4996/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4996/events
|
https://github.com/huggingface/datasets/issues/4996
| 1,379,345,161
|
I_kwDODunzps5SNyMJ
| 4,996
|
Dataset Viewer issue for Jean-Baptiste/wikiner_fr
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/discussions/3\r\n\r\nI'm closing this."
] | 2022-09-20T12:32:07
| 2022-09-27T12:35:44
| 2022-09-27T12:35:44
|
COLLABORATOR
| null | null | null | null |
### Link
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr
### Description
```
Error code: StreamingRowsError
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__
for key, example in self._iter():
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter
yield from ex_iterable
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples
dataset = Dataset.load_from_disk(filepath)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk
with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file:
FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
```
Is it an error with the dataset script, or the data itself, @huggingface/datasets?
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main
### Owner
No
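A minimal sketch of the workaround suggested in the comments: load the `save_to_disk` output locally with `load_from_disk` (which cannot be streamed) and re-upload it with `push_to_hub` so the Hub hosts it in a streamable format. The local path is illustrative:
```python
from datasets import load_from_disk

# Local, non-streaming load of the directory that data.zip was extracted to
# (the one containing state.json and the Arrow files).
ds = load_from_disk("path/to/extracted/data/train")

# Re-upload the split as Parquet files hosted on the Hub.
ds.push_to_hub("Jean-Baptiste/wikiner_fr")
```
The viewer relies on streaming, which Parquet files support out of the box, whereas `Dataset.load_from_disk` inside a loading script does not.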
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4996/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4996/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7 days, 0:03:37
|
https://api.github.com/repos/huggingface/datasets/issues/4995
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4995/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4995/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4995/events
|
https://github.com/huggingface/datasets/issues/4995
| 1,379,108,482
|
I_kwDODunzps5SM4aC
| 4,995
|
Get a specific Exception when the dataset has no data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[] | 2022-09-20T09:31:59
| 2022-09-21T12:21:25
| 2022-09-21T12:21:25
|
COLLABORATOR
| null | null | null | null |
In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files.
In that case, instead of showing a complex traceback, we want to show a call to action to help the user upload data.
To do that, it would be very helpful to know for sure that the repository is missing any (supported) data files.
It could be done by raising a custom exception, for example, `NoDataError`.
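A hypothetical sketch of how a caller such as the viewer could branch on the proposed exception; `NoDataError` is only the name suggested in this issue, so it is defined locally here rather than imported from `datasets`:
```python
from datasets import load_dataset


class NoDataError(FileNotFoundError):
    """Stand-in for the dedicated exception proposed in this issue."""


def rows_or_call_to_action(dataset_name: str):
    # If load_dataset raised NoDataError for repositories without any
    # supported data files, the viewer could show an "upload your data"
    # call to action instead of rendering a generic traceback.
    try:
        return load_dataset(dataset_name, streaming=True)
    except NoDataError:
        return {"error": "no-data", "hint": "This repository has no data files yet."}
```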
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4995/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4995/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 2:49:26
|
https://api.github.com/repos/huggingface/datasets/issues/4994
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4994/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4994/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4994/events
|
https://github.com/huggingface/datasets/issues/4994
| 1,379,084,015
|
I_kwDODunzps5SMybv
| 4,994
|
delete the hardcoded license list in `datasets`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2022-09-20T09:14:41
| 2022-09-22T11:45:47
| 2022-09-22T11:45:47
|
MEMBER
| null | null | null | null |
> Feel free to delete the license list in `datasets` [...]
>
> Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_
> [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now?
_Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4994/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4994/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 2:31:06
|
https://api.github.com/repos/huggingface/datasets/issues/4990
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4990/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4990/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4990/events
|
https://github.com/huggingface/datasets/issues/4990
| 1,378,120,806
|
I_kwDODunzps5SJHRm
| 4,990
|
"no-token" is passed to `huggingface_hub` when token is `None`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.",
"Hi @albertvillanova , thanks for finding the original issue :+1: \r\n\r\nAs of next release of `huggingface_hub`, the `token` argument will be deprecated in favor of the `use_auth_token` argument in `dataset_info` method. This change as been done by @SBrandeis in https://github.com/huggingface/huggingface_hub/pull/928. `use_auth_token` is a bit different and allow the case \"don't sent the cached token by default\".\r\n\r\nIf you want to strictly avoid sending the cached token from `datasets`, you can use:\r\n```py\r\n# token=token if token else \"no-token\", <- will fail because token is not valid\r\n\r\nuse_auth_token=token if token else False, # using the new `use_auth_token` parameter\r\n```\r\n\r\nAnd as a note, I am currently updating the \"don't send the cached token by default\"-rule to \"don't send the cached token on public repos by default but use it in private ones\" in https://github.com/huggingface/huggingface_hub/pull/1064. This will not change the fact that `use_auth_token=False` doesn't send the token at all.\r\n",
"What is current strategy in term of updating `huggingface_hub` version in `datasets` ? I don't want to break stuff in the next release so let's find a proper solution :) ",
"As soon as `token` is deprecated and hfh has a new release, we'll update `datasets` to use the new argument instead. Does it sound good to you ?",
"Perfect :ok_hand: ",
"Hi @Wauplin, thanks for the warning about the deprecation of `token` in favor of `use_auth_token`.\r\n\r\nIndeed, in datasets we use internally `use_auth_token`, which in this case was transformed to `token` to call `HfApi.dataset_info`:\r\nhttps://github.com/huggingface/datasets/blob/1a9385d7cc8a3241b44015145ef56a230fdadc51/src/datasets/load.py#L747\r\n\r\nTherefore, for the new hfh release, the fix will be trivial: we will pass directly `use_auth_token`.\r\n\r\nAs discussed during our meeting yesterday, due to the fact that at datasets we support multiple hfh versions, I think we should handle passing `token` or `use_auth_token` depending on the hfh version."
] | 2022-09-19T15:14:40
| 2022-09-30T09:16:00
| 2022-09-30T09:16:00
|
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
In the 2 lines listed below, a token is passed to `huggingface_hub` to get information about a dataset. If no token is provided, the string "no-token" is passed instead. What is the purpose of this? If there is no real need for it, I would prefer that the `None` value be sent directly and handled by `huggingface_hub`. My feeling is that this only works because we assume the token will never be validated.
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121
## Expected results
Pass `token=None` to `huggingface_hub`.
## Actual results
`token="no-token"` is passed.
## Environment info
`huggingface_hub v0.10.0dev`
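A sketch of the alternative suggested in the comments, based on the `use_auth_token` parameter that `huggingface_hub` 0.10 adds to `dataset_info` (so whether this runs as-is depends on the installed `huggingface_hub` version); the repo id and the `token` variable are illustrative:
```python
from huggingface_hub import HfApi

api = HfApi()
token = None  # e.g. the value forwarded from load_dataset(..., use_auth_token=...)

# With use_auth_token, False means "never send the locally cached token",
# so no "no-token" sentinel string is needed when the caller passed None.
info = api.dataset_info(
    "squad",
    use_auth_token=token if token else False,
)
print(info.id)
```
Since `datasets` supports several `huggingface_hub` versions at once, the thread above concludes that the choice between `token` and `use_auth_token` has to depend on the installed version.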
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4990/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4990/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10 days, 18:01:20
|
https://api.github.com/repos/huggingface/datasets/issues/4989
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4989/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4989/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4989/events
|
https://github.com/huggingface/datasets/issues/4989
| 1,376,832,233
|
I_kwDODunzps5SEMrp
| 4,989
|
Running add_column() seems to corrupt existing sequence-type column info
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/93728165?v=4",
"events_url": "https://api.github.com/users/derek-rocheleau/events{/privacy}",
"followers_url": "https://api.github.com/users/derek-rocheleau/followers",
"following_url": "https://api.github.com/users/derek-rocheleau/following{/other_user}",
"gists_url": "https://api.github.com/users/derek-rocheleau/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/derek-rocheleau",
"id": 93728165,
"login": "derek-rocheleau",
"node_id": "U_kgDOBZYtpQ",
"organizations_url": "https://api.github.com/users/derek-rocheleau/orgs",
"received_events_url": "https://api.github.com/users/derek-rocheleau/received_events",
"repos_url": "https://api.github.com/users/derek-rocheleau/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/derek-rocheleau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/derek-rocheleau/subscriptions",
"type": "User",
"url": "https://api.github.com/users/derek-rocheleau",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Nevermind, I was incorrect."
] | 2022-09-17T17:42:05
| 2022-09-19T12:54:54
| 2022-09-19T12:54:54
|
NONE
| null | null | null | null |
I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like:
ds = load_dataset(...)
df = ds.to_pandas()
df:
foo_0 | foo_1 | foo_2 | foo_3
0.0 | 1.0 | 2.0 | 3.0
If I run .add_column("new_col", data) on the dataset, and then .to_pandas() on the resulting new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4, the 4 elements that should have been split into separate columns. Dataframe 1st row would be:
ds = load_dataset(...)
new_ds = ds.add_column("new_col", data)
df = new_ds.to_pandas()
df:
foo | new_col
[0.0, 1.0, 2.0, 3.0] | new_val
I've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different.
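A self-contained version of the snippet above, using a toy in-memory dataset in place of `load_dataset(...)` (the column name matches the description, the values are made up), which makes it easy to compare the two dataframes directly:
```python
from datasets import Dataset, Features, Sequence, Value

# Toy stand-in for the real dataset: a single sequence-type column "foo".
ds = Dataset.from_dict(
    {"foo": [[0.0, 1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0]]},
    features=Features({"foo": Sequence(Value("float64"))}),
)

df_before = ds.to_pandas()

new_ds = ds.add_column("new_col", ["new_val", "other_val"])
df_after = new_ds.to_pandas()

print(df_before.columns.tolist())
print(df_after.columns.tolist())
```
With this toy data, both dataframes keep `foo` as a single column of length-4 lists, before and after `add_column`, which matches the "after" behaviour described above.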
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/93728165?v=4",
"events_url": "https://api.github.com/users/derek-rocheleau/events{/privacy}",
"followers_url": "https://api.github.com/users/derek-rocheleau/followers",
"following_url": "https://api.github.com/users/derek-rocheleau/following{/other_user}",
"gists_url": "https://api.github.com/users/derek-rocheleau/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/derek-rocheleau",
"id": 93728165,
"login": "derek-rocheleau",
"node_id": "U_kgDOBZYtpQ",
"organizations_url": "https://api.github.com/users/derek-rocheleau/orgs",
"received_events_url": "https://api.github.com/users/derek-rocheleau/received_events",
"repos_url": "https://api.github.com/users/derek-rocheleau/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/derek-rocheleau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/derek-rocheleau/subscriptions",
"type": "User",
"url": "https://api.github.com/users/derek-rocheleau",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4989/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4989/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 19:12:49
|
https://api.github.com/repos/huggingface/datasets/issues/4988
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4988/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4988/events
|
https://github.com/huggingface/datasets/issues/4988
| 1,376,096,584
|
I_kwDODunzps5SBZFI
| 4,988
|
Add `IterableDataset.from_generator` to the API
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh",
"user_view_type": "public"
}
] |
[
"#take",
"Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help",
"Thank you! I certainly will reach out if I need any help."
] | 2022-09-16T15:19:41
| 2022-10-05T12:10:49
| 2022-10-05T12:10:49
|
COLLABORATOR
| null | null | null | null |
We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator.
cc @lhoestq
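A rough usage sketch of how the proposed method could mirror the existing `Dataset.from_generator`; the `IterableDataset.from_generator` call shown here is an assumed signature for the requested feature, not a confirmed API:
```python
from datasets import Dataset, IterableDataset

def gen():
    for i in range(3):
        yield {"text": f"example {i}"}  # each yielded item is one example (a dict)

eager_ds = Dataset.from_generator(gen)         # existing API: materializes all examples
lazy_ds = IterableDataset.from_generator(gen)  # proposed API: would yield examples lazily

for example in lazy_ds:
    print(example)
```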
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4988/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 18 days, 20:51:08
|
https://api.github.com/repos/huggingface/datasets/issues/4983
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4983/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4983/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4983/events
|
https://github.com/huggingface/datasets/issues/4983
| 1,375,667,654
|
I_kwDODunzps5R_wXG
| 4,983
|
How to convert torch.utils.data.Dataset to huggingface dataset?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/77595952?v=4",
"events_url": "https://api.github.com/users/DEROOCE/events{/privacy}",
"followers_url": "https://api.github.com/users/DEROOCE/followers",
"following_url": "https://api.github.com/users/DEROOCE/following{/other_user}",
"gists_url": "https://api.github.com/users/DEROOCE/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DEROOCE",
"id": 77595952,
"login": "DEROOCE",
"node_id": "MDQ6VXNlcjc3NTk1OTUy",
"organizations_url": "https://api.github.com/users/DEROOCE/orgs",
"received_events_url": "https://api.github.com/users/DEROOCE/received_events",
"repos_url": "https://api.github.com/users/DEROOCE/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DEROOCE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DEROOCE/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DEROOCE",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:\r\n # yield ex\r\n\r\ndset = Dataset.from_generator(gen)\r\n```",
"Maybe `Dataset.from_list` can work as well no ?\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndset = Dataset.from_list(torch_dataset)\r\n```",
"> ```python\r\n> from datasets import Dataset\r\n> \r\n> def gen():\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> ## or if it's an IterableDataset\r\n> # for ex in torch_dataset:\r\n> # yield ex\r\n> \r\n> dset = Dataset.from_generator(gen)\r\n> ```\r\n\r\nI try to use `Dataset.from_generator()` method, and it returns an error:\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_generator'\r\n```\r\nAnd I think it maybe the version of my datasets package is out-of-date, so I update it\r\n```bash\r\npip install --upgrade datasets\r\n```\r\nBut after that, the code still return the above Error. ",
"> ```python\r\n> dset = Dataset.from_list(torch_dataset)\r\n> ```\r\n\r\nIt seems that Dataset also has no `from_list` method π\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_list'\r\n```",
"> I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> \r\n> ```python\r\n> from datasets import Dataset\r\n> data = [[1, 2],[3, 4]]\r\n> ds = Dataset.from_dict({\"data\": data})\r\n> ds = ds.with_format(\"torch\")\r\n> ds[0]\r\n> ds[:2]\r\n> ```\r\n> \r\n> So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n\r\nMy dummy code is like:\r\n```python\r\nimport os\r\nimport json\r\nfrom torch.utils import data\r\nimport datasets\r\n\r\ndef gen(torch_dataset):\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n\r\nclass MyDataset(data.Dataset):\r\n def __init__(self, path):\r\n self.dict = []\r\n for line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n self.dict.append(j_dict['context'])\r\n \r\n def __getitem__(self, idx):\r\n return self.dict[idx]\r\n\r\n def __len__(self):\r\n return len(self.dict)\r\n\r\nroot_path = os.path.dirname(os.path.abspath(__file__))\r\npath = os.path.join(root_path, 'dataset', 'train.json')\r\ntorch_dataset = MyDataset(path)\r\n\r\ndit = []\r\nfor line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n dit.append(j_dict['context'])\r\ndset1 = datasets.Dataset.from_list(dit)\r\nprint(dset1)\r\ndset2 = datasets.Dataset.from_generator(gen)\r\nprint(dset2)\r\n```",
"We're releasing `from_generator` and `from_list` today :)\r\nIn the meantime you can play with them by installing `datasets` from source",
"> We're releasing `from_generator` and `from_list` today :) In the meantime you can play with them by installing `datasets` from source\r\n\r\nThanks a lot for your work!",
"> > I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> > ```python\r\n> > from datasets import Dataset\r\n> > data = [[1, 2],[3, 4]]\r\n> > ds = Dataset.from_dict({\"data\": data})\r\n> > ds = ds.with_format(\"torch\")\r\n> > ds[0]\r\n> > ds[:2]\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n> \r\n> My dummy code is like:\r\n> \r\n> ```python\r\n> import os\r\n> import json\r\n> from torch.utils import data\r\n> import datasets\r\n> \r\n> def gen(torch_dataset):\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> \r\n> class MyDataset(data.Dataset):\r\n> def __init__(self, path):\r\n> self.dict = []\r\n> for line in open(path, 'r', encoding='utf-8'):\r\n> j_dict = json.loads(line)\r\n> self.dict.append(j_dict['context'])\r\n> \r\n> def __getitem__(self, idx):\r\n> return self.dict[idx]\r\n> \r\n> def __len__(self):\r\n> return len(self.dict)\r\n> \r\n> root_path = os.path.dirname(os.path.abspath(__file__))\r\n> path = os.path.join(root_path, 'dataset', 'train.json')\r\n> torch_dataset = MyDataset(path)\r\n> \r\n> dit = []\r\n> for line in open(path, 'r', encoding='utf-8'):\r\n> j_dict = json.loads(line)\r\n> dit.append(j_dict['context'])\r\n> dset1 = datasets.Dataset.from_list(dit)\r\n> print(dset1)\r\n> dset2 = datasets.Dataset.from_generator(gen)\r\n> print(dset2)\r\n> ```\r\nHi, when I am using this code to build my own dataset, ` datasets.Dataset.from_generator(gen)` report `TypeError: cannot pickle generator object` whre MyDataset returns a dict like {'image': bytes, 'text': string}. How can I resolve this? Thanks a lot!",
"Hi ! Right now generator functions are expected to be picklable, so that `datasets` can hash it and use the hash to cache the resulting Dataset on disk. Maybe this can be improved.\r\n\r\nIn the meantime, can you check that you're not using unpickable objects. In your case it looks like you're using a generator object that is unpickable. It might come from an opened file, e.g. this doesn't work:\r\n```python\r\nwith open(...) as f:\r\n\r\n def gen():\r\n for x in f:\r\n yield json.loads(x)\r\n\r\n ds = Dataset.from_generator(gen)\r\n```\r\nbut this does work:\r\n```python\r\ndef gen():\r\n with open(...) as f:\r\n for x in f:\r\n yield json.loads(x)\r\n\r\nds = Dataset.from_generator(gen)\r\n```",
"> Hi ! Right now generator functions are expected to be picklable, so that `datasets` can hash it and use the hash to cache the resulting Dataset on disk. Maybe this can be improved.\r\n> \r\n> In the meantime, can you check that you're not using unpickable objects. In your case it looks like you're using a generator object that is unpickable. It might come from an opened file, e.g. this doesn't work:\r\n> \r\n> ```python\r\n> with open(...) as f:\r\n> \r\n> def gen():\r\n> for x in f:\r\n> yield json.loads(x)\r\n> \r\n> ds = Dataset.from_generator(gen)\r\n> ```\r\n> \r\n> but this does work:\r\n> \r\n> ```python\r\n> def gen():\r\n> with open(...) as f:\r\n> for x in f:\r\n> yield json.loads(x)\r\n> \r\n> ds = Dataset.from_generator(gen)\r\n> ```\r\n\r\nThanks a lot! That's the reason why I have encountered this issue. Sorry for bothering you again with another problem, since my dataset is large and I use IterableDataset.from_generator which has no attribute with_transform, how can I equip it with some customed preprocessings like Dataset.from_generator? Should I move the preprocessing to the my torch Dataset?",
"Iterable datasets are lazy: exactly like `with_transform` they apply processing on the fly when accessing the examples.\r\n\r\nTherefore you can use `my_iterable_dataset.map()` instead :)",
"@lhoestq thanks a lot and I have successfully made it work~",
"@lhoestq I am having a similar issue. Can you help me understand which kinds of generators are picklable? I previously thought that no generators are picklable so I'm intrigued to hear this.",
"Generator functions are generally picklable. E.g.\r\n```python\r\nimport dill as pickle\r\n\r\ndef generator_fn():\r\n for i in range(10):\r\n yield i\r\n\r\npickle.dumps(generator_fn)\r\n```\r\n\r\nhowever generators are not picklable\r\n```python\r\ngenerator = generator_fn()\r\npickle.dumps(generator)\r\n# TypeError: cannot pickle 'generator' object\r\n```\r\n\r\nThough it can happen that some generator functions are not recursively picklable if they use global objects that are not picklable:\r\n```python\r\ndef generator_fn_not_picklable():\r\n for i in generator:\r\n yield i\r\n\r\npickle.dumps(generator_fn_not_picklable, recurse=True)\r\n# TypeError: cannot pickle 'generator' object\r\n````",
"I'm trying to create an IterableDataset from a generator but I get this error:\r\n`PicklingError: Can't pickle <built-in function input>: it's not the same object as builtins.input`\r\n\r\nWhat can I do?"
] | 2022-09-16T09:15:10
| 2023-12-14T20:54:15
| 2022-09-20T11:23:43
|
NONE
| null | null | null | null |
I looked through the huggingface dataset docs, and it seems that there is no official support function to convert `torch.utils.data.Dataset` to a huggingface dataset. However, there is a way to convert a huggingface dataset to `torch.utils.data.Dataset`, like below:
```python
from datasets import Dataset
data = [[1, 2],[3, 4]]
ds = Dataset.from_dict({"data": data})
ds = ds.with_format("torch")
ds[0]
ds[:2]
```
So is there something I missed, or is there really no function to convert `torch.utils.data.Dataset` to a huggingface dataset? If so, is there any way to do this conversion?
Thanks.
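For what it's worth, a sketch of one possible direction (using a made-up toy torch dataset; this is an assumed recipe rather than an official conversion utility): wrap the map-style dataset in a generator function and hand it to `Dataset.from_generator`:
```python
import datasets
from torch.utils import data

class ToyTorchDataset(data.Dataset):
    """A made-up map-style torch dataset used only for illustration."""

    def __init__(self):
        self.items = [{"text": "a"}, {"text": "b"}, {"text": "c"}]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]  # each item must be a dict of column name -> value

torch_dataset = ToyTorchDataset()

def gen():
    # iterate over indices and yield one example dict at a time
    for idx in range(len(torch_dataset)):
        yield torch_dataset[idx]

hf_dataset = datasets.Dataset.from_generator(gen)
print(hf_dataset)
```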
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/77595952?v=4",
"events_url": "https://api.github.com/users/DEROOCE/events{/privacy}",
"followers_url": "https://api.github.com/users/DEROOCE/followers",
"following_url": "https://api.github.com/users/DEROOCE/following{/other_user}",
"gists_url": "https://api.github.com/users/DEROOCE/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DEROOCE",
"id": 77595952,
"login": "DEROOCE",
"node_id": "MDQ6VXNlcjc3NTk1OTUy",
"organizations_url": "https://api.github.com/users/DEROOCE/orgs",
"received_events_url": "https://api.github.com/users/DEROOCE/received_events",
"repos_url": "https://api.github.com/users/DEROOCE/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DEROOCE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DEROOCE/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DEROOCE",
"user_view_type": "public"
}
|
{
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4983/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4983/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4 days, 2:08:33
|