url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 48
51
| id
int64 600M
3.67B
| node_id
stringlengths 18
24
| number
int64 2
7.88k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| comments
listlengths 0
30
| created_at
timestamp[s]date 2020-04-14 18:18:51
2025-11-26 16:16:56
| updated_at
timestamp[s]date 2020-04-29 09:23:05
2025-11-30 03:52:07
| closed_at
timestamp[s]date 2020-04-29 09:23:05
2025-11-21 12:31:19
⌀ | author_association
stringclasses 4
values | type
null | active_lock_reason
null | draft
null | pull_request
null | body
stringlengths 0
228k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
null | state_reason
stringclasses 4
values | sub_issues_summary
dict | issue_dependencies_summary
dict | is_pull_request
bool 1
class | closed_at_time_taken
duration[s] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5624
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5624/events
|
https://github.com/huggingface/datasets/issues/5624
| 1,617,400,192
|
I_kwDODunzps5gZ5GA
| 5,624
|
glue datasets returning -1 for test split
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4",
"events_url": "https://api.github.com/users/lithafnium/events{/privacy}",
"followers_url": "https://api.github.com/users/lithafnium/followers",
"following_url": "https://api.github.com/users/lithafnium/following{/other_user}",
"gists_url": "https://api.github.com/users/lithafnium/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lithafnium",
"id": 8939967,
"login": "lithafnium",
"node_id": "MDQ6VXNlcjg5Mzk5Njc=",
"organizations_url": "https://api.github.com/users/lithafnium/orgs",
"received_events_url": "https://api.github.com/users/lithafnium/received_events",
"repos_url": "https://api.github.com/users/lithafnium/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lithafnium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lithafnium/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lithafnium",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answered: https://huggingface.co/datasets/glue/discussions/5#63907885937867f0cb3cde31\r\n> The test labels are not public.\r\n>\r\n> Note this dataset belongs to a benchmark: people send their predictions for the test split to GLUE (https://gluebenchmark.com/) and then they get a score in their leaderboard...\r\n"
] | 2023-03-09T14:47:18
| 2023-03-09T16:49:29
| 2023-03-09T16:49:29
|
NONE
| null | null | null | null |
### Describe the bug
Downloading any dataset from GLUE has -1 as class labels for test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online.
### Steps to reproduce the bug
```
dataset = load_dataset("glue", "sst2")
for d in dataset:
# prints out -1
print(d["label"]
```
### Expected behavior
Expected behavior should be 0/1 instead of -1.
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:02:11
|
https://api.github.com/repos/huggingface/datasets/issues/5618
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5618/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5618/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5618/events
|
https://github.com/huggingface/datasets/issues/5618
| 1,612,977,934
|
I_kwDODunzps5gJBcO
| 5,618
|
Unpin fsspec < 2023.3.0 once issue fixed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2023-03-07T08:41:51
| 2023-03-07T13:39:03
| 2023-03-07T13:39:03
|
MEMBER
| null | null | null | null |
Unpin `fsspec` upper version once root cause of our CI break is fixed.
See:
- #5614
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5618/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5618/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4:57:12
|
https://api.github.com/repos/huggingface/datasets/issues/5616
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5616/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5616/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5616/events
|
https://github.com/huggingface/datasets/issues/5616
| 1,612,932,508
|
I_kwDODunzps5gI2Wc
| 5,616
|
CI is broken after fsspec-2023.3.0 release
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[] | 2023-03-07T08:06:39
| 2023-03-07T08:37:29
| 2023-03-07T08:37:29
|
MEMBER
| null | null | null | null |
As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677.1887748, 'islink': False, 'mode': 33188, 'uid': 1001, 'gid': 123, 'mtime': 1678175677.1887748, 'ino': 286957, 'nlink': 1} != 'file.txt'
Full diff:
[
- 'file.txt',
+ {'created': 1678175677.1887748,
+ 'gid': 123,
+ 'ino': 286957,
+ 'islink': False,
+ 'mode': 33188,
+ 'mtime': 1678175677.1887748,
+ 'name': 'file.txt',
+ 'nlink': 1,
+ 'size': 70,
+ 'type': 'file',
+ 'uid': 1001},
]
```
Also:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[GzipFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[Lz4FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[XzFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[ZstdFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
===== 5 failed, 2134 passed, 18 skipped, 38 warnings in 157.21s (0:02:37) ======
```
See:
- fsspec/filesystem_spec#1205
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5616/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5616/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:30:50
|
https://api.github.com/repos/huggingface/datasets/issues/5615
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5615/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5615/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5615/events
|
https://github.com/huggingface/datasets/issues/5615
| 1,612,552,653
|
I_kwDODunzps5gHZnN
| 5,615
|
IterableDataset.add_column is unable to accept another IterableDataset as a parameter.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6466389?v=4",
"events_url": "https://api.github.com/users/zsaladin/events{/privacy}",
"followers_url": "https://api.github.com/users/zsaladin/followers",
"following_url": "https://api.github.com/users/zsaladin/following{/other_user}",
"gists_url": "https://api.github.com/users/zsaladin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zsaladin",
"id": 6466389,
"login": "zsaladin",
"node_id": "MDQ6VXNlcjY0NjYzODk=",
"organizations_url": "https://api.github.com/users/zsaladin/orgs",
"received_events_url": "https://api.github.com/users/zsaladin/received_events",
"repos_url": "https://api.github.com/users/zsaladin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zsaladin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsaladin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zsaladin",
"user_view_type": "public"
}
|
[
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] |
closed
| false
| null |
[] |
[
"Hi! You can use `concatenate_datasets([ids1, ids2], axis=1)` to do this."
] | 2023-03-07T01:52:00
| 2023-03-09T15:24:05
| 2023-03-09T15:23:54
|
NONE
| null | null | null | null |
### Describe the bug
`IterableDataset.add_column` occurs an exception when passing another `IterableDataset` as a parameter.
The method seems to accept only eager evaluated values.
https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391
I wrote codes below to make it.
```py
def add_column(dataset: IterableDataset, name: str, add_dataset: IterableDataset, key: str) -> IterableDataset:
iter_add_dataset = iter(add_dataset)
def add_column_fn(example):
if name in example:
raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
return {name: next(iter_add_dataset)[key]}
return dataset.map(add_column_fn)
```
Is there other way to do it? Or is it intended?
### Steps to reproduce the bug
Thie codes below occurs `NotImplementedError`
```py
from datasets import IterableDataset
def gen(num):
yield {f"col{num}": 1}
yield {f"col{num}": 2}
yield {f"col{num}": 3}
ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1})
ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2})
new_ids = ids1.add_column("new_col", ids1)
for row in new_ids:
print(row)
```
### Expected behavior
`IterableDataset.add_column` is able to task `IterableDataset` and lazy evaluated values as a parameter since IterableDataset is lazy evalued.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.7
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5615/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5615/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 13:31:54
|
https://api.github.com/repos/huggingface/datasets/issues/5613
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5613/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5613/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5613/events
|
https://github.com/huggingface/datasets/issues/5613
| 1,611,875,473
|
I_kwDODunzps5gE0SR
| 5,613
|
Version mismatch with multiprocess and dill on Python 3.10
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4",
"events_url": "https://api.github.com/users/adampauls/events{/privacy}",
"followers_url": "https://api.github.com/users/adampauls/followers",
"following_url": "https://api.github.com/users/adampauls/following{/other_user}",
"gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adampauls",
"id": 1243668,
"login": "adampauls",
"node_id": "MDQ6VXNlcjEyNDM2Njg=",
"organizations_url": "https://api.github.com/users/adampauls/orgs",
"received_events_url": "https://api.github.com/users/adampauls/received_events",
"repos_url": "https://api.github.com/users/adampauls/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adampauls/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adampauls",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Sorry, I just found https://github.com/apache/beam/issues/24458. It seems this issue is being worked on. ",
"Reopening, since I think the docs should inform the user of this problem. For example, [this page](https://huggingface.co/docs/datasets/installation) says \r\n> Datasets is tested on Python 3.7+.\r\n\r\nbut it should probably say that Beam Datasets do not work with Python 3.10 (or link to a known issues page). ",
"Same problem on Colab using a vanilla setup running :\r\nPython 3.10.11 \r\napache-beam 2.47.0\r\ndatasets 2.12.0",
"Same problem, \r\npy 3.10.11\r\napache-beam==2.47.0\r\ndatasets==2.12.0",
"I have made a workaround by forcing an install of the version of `multiprocess` version `0.70.15` (after installing `datasets` and `apache-beam`). I can confirm that (on Python 3.10 in [this colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing)) `datasets` can download pre-processed Wikipedia dumps and can download non-pre-processed dumps using `beam_runner=\"DirectRunner\"`. I don't know if/how other `beam_runner`s can be made compatible.",
"Same problem.\r\n\r\n```\r\npython = \"^3.10\"\r\napache-beam = { extras = [\"gcp\"], version = \"2.54.0\" }\r\ndatasets = \"^2.18.0\"\r\n```"
] | 2023-03-06T17:14:41
| 2024-04-05T20:13:52
| null |
NONE
| null | null | null | null |
### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>
import datasets
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 30, in <module>
from .download.download_config import DownloadConfig
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/download/download_manager.py", line 35, in <module>
from ..utils.py_utils import NestedDataStructure, map_nested, size_str
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 40, in <module>
import multiprocess.pool
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/pool.py", line 609, in <module>
class ThreadPool(Pool):
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/pool.py", line 611, in ThreadPool
from .dummy import Process
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/dummy/__init__.py", line 87, in <module>
class Condition(threading._Condition):
AttributeError: module 'threading' has no attribute '_Condition'. Did you mean: 'Condition'?
```
I think this is a bad interaction of versions from `dill`, `multiprocess`, `apache-beam`, and `threading` from the Python (3.10) standard lib. Upgrading `multiprocess` to a version that does not crash like this is not possible because `apache-beam` pins `dill` to and old version:
```
Because multiprocess (0.70.10) depends on dill (>=0.3.2)
and apache-beam (2.45.0) depends on dill (>=0.3.1.1,<0.3.2), multiprocess (0.70.10) is incompatible with apache-beam (2.45.0).
And because no versions of apache-beam match >2.45.0,<3.0.0, multiprocess (0.70.10) is incompatible with apache-beam (>=2.45.0,<3.0.0).
So, because yyy depends on both apache-beam (^2.45.0) and multiprocess (0.70.10), version solving failed.
```
Perhaps it is not right to file a bug here, but I'm not totally sure whose fault it is. And in any case, this is an immediate blocker to using `datasets` out of the box.
Possibly related to https://github.com/huggingface/datasets/issues/5232.
### Steps to reproduce the bug
Steps to reproduce:
1. Make a poetry project with this configuration
```
[tool.poetry]
name = "yyy"
version = "0.1.0"
description = ""
authors = ["Adam Pauls <adpauls@gmail.com>"]
readme = "README.md"
packages = [{ include = "xxx" }]
[tool.poetry.dependencies]
python = ">=3.10,<3.11"
datasets = "^2.10.1"
apache-beam = "^2.45.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
2. `poetry install`.
3. `poetry run python -c "import datasets"`.
### Expected behavior
Script runs.
### Environment info
Python 3.10. Here are the versions installed by `poetry`:
```
•• Installing frozenlist (1.3.3)
• Installing idna (3.4)
• Installing multidict (6.0.4)
• Installing aiosignal (1.3.1)
• Installing async-timeout (4.0.2)
• Installing attrs (22.2.0)
• Installing certifi (2022.12.7)
• Installing charset-normalizer (3.1.0)
• Installing six (1.16.0)
• Installing urllib3 (1.26.14)
• Installing yarl (1.8.2)
• Installing aiohttp (3.8.4)
• Installing dill (0.3.1.1)
• Installing docopt (0.6.2)
• Installing filelock (3.9.0)
• Installing numpy (1.22.4)
• Installing pyparsing (3.0.9)
• Installing protobuf (3.19.4)
• Installing packaging (23.0)
• Installing python-dateutil (2.8.2)
• Installing pytz (2022.7.1)
• Installing pyyaml (6.0)
• Installing requests (2.28.2)
• Installing tqdm (4.65.0)
• Installing typing-extensions (4.5.0)
• Installing cloudpickle (2.2.1)
• Installing crcmod (1.7)
• Installing fastavro (1.7.2)
• Installing fasteners (0.18)
• Installing fsspec (2023.3.0)
• Installing grpcio (1.51.3)
• Installing hdfs (2.7.0)
• Installing httplib2 (0.20.4)
• Installing huggingface-hub (0.12.1)
• Installing multiprocess (0.70.9)
• Installing objsize (0.6.1)
• Installing orjson (3.8.7)
• Installing pandas (1.5.3)
• Installing proto-plus (1.22.2)
• Installing pyarrow (9.0.0)
• Installing pydot (1.4.2)
• Installing pymongo (3.13.0)
• Installing regex (2022.10.31)
• Installing responses (0.18.0)
• Installing xxhash (3.2.0)
• Installing zstandard (0.20.0)
• Installing apache-beam (2.45.0)
• Installing datasets (2.10.1)
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4",
"events_url": "https://api.github.com/users/adampauls/events{/privacy}",
"followers_url": "https://api.github.com/users/adampauls/followers",
"following_url": "https://api.github.com/users/adampauls/following{/other_user}",
"gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adampauls",
"id": 1243668,
"login": "adampauls",
"node_id": "MDQ6VXNlcjEyNDM2Njg=",
"organizations_url": "https://api.github.com/users/adampauls/orgs",
"received_events_url": "https://api.github.com/users/adampauls/received_events",
"repos_url": "https://api.github.com/users/adampauls/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adampauls/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adampauls",
"user_view_type": "public"
}
|
{
"+1": 10,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5613/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5613/timeline
| null |
reopened
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5612
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5612/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5612/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5612/events
|
https://github.com/huggingface/datasets/issues/5612
| 1,611,262,510
|
I_kwDODunzps5gCeou
| 5,612
|
Arrow map type in parquet files unsupported
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I'm attaching a minimal reproducible example:\r\n```python\r\nfrom datasets import load_dataset\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\ntable_with_map = pa.Table.from_pydict(\r\n {\"a\": [1, 2], \"b\": [[(\"a\", 2)], [(\"b\", 4)]]},\r\n schema=pa.schema({\"a\": pa.int32(), \"b\": pa.map_(pa.string(), pa.int32())})\r\n)\r\npq.write_table(table_with_map, \"parquet_with_map.parquet\")\r\ndset = load_dataset(\"parquet\", data_files=\"parquet_with_map.parquet\", split=\"train\") # error unless streaming=True\r\n``` \r\n\r\nFor a dataset generated with the packaged loaders (CSV, JSON, Parquet), `streaming=True` sets the dataset's features to `None` (unless explicitly provided in `load_dataset`), hence no error will be thrown as long as the features stay \"unresolved\" (resolving the features with `_resolve_features` will lead to an error).",
"I've also been wondering about datasets support for Arrow Map datatypes. I had a situation where I had a pandas series of dict[str, float] with hundreds of different possible key values (ie. not bounded), and this got converted to a sequence of structs where every single struct had the entire set of keys.\r\n\r\nI worked around it, by explicitly creating a sequence of [str, float], but given that pyarrow has an explicit Map datatype, it would be good to be able to explicitly cast/force this data type combination.",
"(feel free to ignore) polars will not support this type: https://github.com/pola-rs/polars/issues/3942#issuecomment-1202331210\r\n\r\n> Polars will not add the map dtype. It's benefit do not outweigh the extra complexity. Maybe we can investigate conversion of maps to struct. But I will have to explore that.",
"Looks like they chose to convert every instance with https://github.com/pola-rs/polars/pull/4226"
] | 2023-03-06T12:03:24
| 2024-03-15T18:56:12
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=True` solves the issue.
### Steps to reproduce the bug
The dataset is private, but this can be reproduced with any dataset that has Arrow maps.
### Expected behavior
Loading the dataset no matter whether streaming is True or not.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.15.0-1029-gcp-x86_64-with-glibc2.31
- Python version: 3.10.7
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 5,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5612/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5612/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5610
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5610/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5610/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5610/events
|
https://github.com/huggingface/datasets/issues/5610
| 1,610,698,006
|
I_kwDODunzps5gAU0W
| 5,610
|
use datasets streaming mode in trainer ddp mode cause memory leak
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15223544?v=4",
"events_url": "https://api.github.com/users/gromzhu/events{/privacy}",
"followers_url": "https://api.github.com/users/gromzhu/followers",
"following_url": "https://api.github.com/users/gromzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/gromzhu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gromzhu",
"id": 15223544,
"login": "gromzhu",
"node_id": "MDQ6VXNlcjE1MjIzNTQ0",
"organizations_url": "https://api.github.com/users/gromzhu/orgs",
"received_events_url": "https://api.github.com/users/gromzhu/received_events",
"repos_url": "https://api.github.com/users/gromzhu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gromzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gromzhu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gromzhu",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Same problem, \r\ntransformers 4.28.1\r\ndatasets 2.12.0\r\n\r\nleak around 100Mb per 10 seconds when use dataloader_num_werker > 0 in training argumennts for transformer train, possile bug in transformers repo, but still not found solution :(\r\n",
"found an article described a problem, may be helpful for somebody:\r\nhttps://ppwwyyxx.com/blog/2022/Demystify-RAM-Usage-in-Multiprocess-DataLoader/\r\nI confirm, it`s not memory leak, after some time memory growing has stopped",
"\"After some time\" - from your description, it sounds like memory growth can happen for 12 hours+, even days, before it stops? That seems very scary."
] | 2023-03-06T05:26:49
| 2024-03-07T01:11:32
| null |
NONE
| null | null | null | null |
### Describe the bug
use datasets streaming mode in trainer ddp mode cause memory leak
### Steps to reproduce the bug
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, SequentialSampler,DistributedSampler,BatchSampler
torch.manual_seed(42)
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, GPT2Model,DataCollatorForLanguageModeling,AutoModelForCausalLM
from transformers import AdamW, get_linear_schedule_with_warmup
hf_model_path ='./Wenzhong-GPT2-110M'
tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path)
tokenizer.add_special_tokens({'pad_token': '<|pad|>'})
from datasets import load_dataset
gpus=8
max_len = 576
batch_size_node = 17
save_step = 5000
gradient_accumulation = 2
dataloader_num = 4
max_step = 351000*1000//batch_size_node//gradient_accumulation//gpus
#max_step = -1
print("total_step:%d"%(max_step))
import datasets
datasets.version
dataset = load_dataset("text", data_files="./gpt_data_v1/*",split='train',cache_dir='./dataset_cache',streaming=True)
print('load over')
shuffled_dataset = dataset.shuffle(seed=42)
print('shuffle over')
def dataset_tokener(example,max_lenth=max_len):
example['text'] = list(map(lambda x : x.strip()+'<|endoftext|>',example['text'] ))
return tokenizer(example['text'], truncation=True, max_length=max_lenth, padding="longest")
new_new_dataset = shuffled_dataset.map(dataset_tokener, batched=True, remove_columns=["text"])
print('map over')
configuration = GPT2Config.from_pretrained(hf_model_path, output_hidden_states=False)
model = AutoModelForCausalLM.from_pretrained(hf_model_path)
model.resize_token_embeddings(len(tokenizer))
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
from transformers import Trainer,TrainingArguments
import os
print("strat train")
training_args = TrainingArguments(output_dir="./test_trainer",
num_train_epochs=1.0,
report_to="none",
do_train=True,
dataloader_num_workers=dataloader_num,
local_rank=int(os.environ.get('LOCAL_RANK', -1)),
overwrite_output_dir=True,
logging_strategy='steps',
logging_first_step=True,
logging_dir="./logs",
log_on_each_node=False,
per_device_train_batch_size=batch_size_node,
warmup_ratio=0.03,
save_steps=save_step,
save_total_limit=5,
gradient_accumulation_steps=gradient_accumulation,
max_steps=max_step,
disable_tqdm=False,
data_seed=42
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=new_new_dataset,
eval_dataset=None,
tokenizer=tokenizer,
data_collator=DataCollatorForLanguageModeling(tokenizer,mlm=False),
#compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
#preprocess_logits_for_metrics=preprocess_logits_for_metrics
#if training_args.do_eval and not is_torch_tpu_available()
#else None,
)
trainer.train(resume_from_checkpoint=True)
### Expected behavior
use the train code uppper
my dataset ./gpt_data_v1 have 1000 files, each file size is 120mb
start cmd is : python -m torch.distributed.launch --nproc_per_node=8 my_train.py
here is result:

here is memory usage monitor in 12 hours

every dataloader work allocate over 24gb cpu memory
according to memory usage monitor in 12 hours,sometime small memory releases, but total memory usage is increase.
i think datasets streaming mode should not used so much memery,so maybe somewhere has memory leak.
### Environment info
pytorch 1.11.0
py 3.8
cuda 11.3
transformers 4.26.1
datasets 2.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5610/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5610/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5609
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5609/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5609/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5609/events
|
https://github.com/huggingface/datasets/issues/5609
| 1,610,062,862
|
I_kwDODunzps5f95wO
| 5,609
|
`load_from_disk` vs `load_dataset` performance.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi! We've recently made some improvements to `save_to_disk`/`list_to_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".",
"Great to hear! I'll give it a try when I've got a moment.",
"@mariosasko is that fix released to pip in the meantime? Asking cause im facing still the same issue (regarding loading images from local paths):\r\n```\r\ndataset = load_dataset(\"csv\", cache_dir=\"cache\", data_files=[\"/STORAGE/DATA/mijam/vit/code/list_filtered.csv\"], num_proc=16, split=\"train\").cast_column(\"image\", Image())\r\ndataset = dataset.class_encode_column(\"label\")\r\n```\r\nquite fast. \r\n\r\nThen I do `save_to_disk()` and some time later:\r\n```\r\ndataset = load_from_disk('/STORAGE/DATA/mijam/accel/saved_arrow_big')\r\n```\r\nreally slow. In theory it should be quicked since it only loads arrow files, no conversions and so on.\r\n",
"@mjamroz I assume your CSV file stores image file paths. This means `save_to_disk` needs to embed the image bytes resulting in a much bigger Arrow file (than the initial one). Maybe specifying `num_shards` to make the Arrow files smaller can help (large Arrow files on some systems take a long time to load)."
] | 2023-03-05T05:27:15
| 2023-07-13T18:48:05
| null |
NONE
| null | null | null | null |
### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_disk` and then use `load_from_disk` to load the filtered version.
The performance of these two approaches is wildly different:
* Using `load_dataset` takes about 20 seconds to load the dataset, and a few seconds to re-filter (thanks to the brilliant filter/map caching)
* Using `load_from_disk` takes 14 minutes! And the second time I tried, the session just crashed (on a machine with 32GB of RAM)
I don't know if you'd call this a bug, but it seems like there shouldn't need to be two methods to load from disk, or that they should not take such wildly different amounts of time, or that one should not crash. Or maybe that the docs could offer some guidance about when to pick which method and why two methods exist, or just how do most people do it?
Something I couldn't work out from reading the docs was this: can I modify a dataset from the hub, save it (locally) and use `load_dataset` to load it? This [post seemed to suggest that the answer is no](https://discuss.huggingface.co/t/save-and-load-datasets/9260).
### Steps to reproduce the bug
See above
### Expected behavior
Load times should be about the same.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| null |
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5609/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5609/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5608
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5608/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5608/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5608/events
|
https://github.com/huggingface/datasets/issues/5608
| 1,609,996,563
|
I_kwDODunzps5f9pkT
| 5,608
|
audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/joseph-y-cho/events{/privacy}",
"followers_url": "https://api.github.com/users/joseph-y-cho/followers",
"following_url": "https://api.github.com/users/joseph-y-cho/following{/other_user}",
"gists_url": "https://api.github.com/users/joseph-y-cho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joseph-y-cho",
"id": 107211437,
"login": "joseph-y-cho",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/joseph-y-cho/orgs",
"received_events_url": "https://api.github.com/users/joseph-y-cho/received_events",
"repos_url": "https://api.github.com/users/joseph-y-cho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joseph-y-cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joseph-y-cho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joseph-y-cho",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi!\r\n\r\n> naming convention of mp3 files\r\n\r\nYes, this could be the problem. MP3 files should end with `.mp3`/`.MP3` to be recognized as audio files.\r\n\r\nIf the file names are not the culprit, can you paste the audio folder's directory structure to help us reproduce the error (e.g., by running the `tree \"x\"` command)?",
"Hi! I'm sorry, I don't want to reveal my entire dataset, but here's a snippet (all of the mp3 files below are some of the ones not being recognized by audiofolder. Also, for another dataset, audiofolder loaded zero mp3 files because \"train\" was in the name of one of the mp3 files. \r\nmy_dataset\r\n├── data\r\n│ ├── VHA_Innovation_Stories_-_Day_2-123.mp3\r\n│ ├── VHA_Innovation_Stories_-_Day_2-124.mp3\r\n│ ├── ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-93.mp3\r\n│ ├── ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-94.mp3\r\n│ ├── ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-95.mp3\r\n│ ├── Your_Impact\\357\\274\\232_Neurosurgery_equipment-5.mp3\r\n│ └── Your_Impact\\357\\274\\232_Neurosurgery_equipment-6.mp3\r\n└── metadata.csv\r\n\r\nHere's a few of the 13 files recognized by the dataset:\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-1.mp3\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-2.mp3\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-3.mp3\r\nIVP_⧸_IVU_test_Procedure_for_Kidneys_intravenous_pyelogram_-_medical_radiology_X-ray_ivp-1.mp3\r\nIVP_⧸_IVU_test_Procedure_for_Kidneys_intravenous_pyelogram_-_medical_radiology_X-ray_ivp-2.mp3"
] | 2023-03-05T00:14:45
| 2023-03-12T00:02:57
| 2023-03-12T00:02:57
|
NONE
| null | null | null | null |
### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of mp3 files, etc.)
### Steps to reproduce the bug
x = load_dataset("audiofolder", data_dir="x")
### Expected behavior
x = load_dataset("audiofolder", data_dir="x") should create a dataset of 20,000 rows (files).
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/joseph-y-cho/events{/privacy}",
"followers_url": "https://api.github.com/users/joseph-y-cho/followers",
"following_url": "https://api.github.com/users/joseph-y-cho/following{/other_user}",
"gists_url": "https://api.github.com/users/joseph-y-cho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joseph-y-cho",
"id": 107211437,
"login": "joseph-y-cho",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/joseph-y-cho/orgs",
"received_events_url": "https://api.github.com/users/joseph-y-cho/received_events",
"repos_url": "https://api.github.com/users/joseph-y-cho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joseph-y-cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joseph-y-cho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joseph-y-cho",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5608/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5608/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 23:48:12
|
https://api.github.com/repos/huggingface/datasets/issues/5606
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5606/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5606/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5606/events
|
https://github.com/huggingface/datasets/issues/5606
| 1,608,911,632
|
I_kwDODunzps5f5gsQ
| 5,606
|
Add `Dataset.to_list` to the API
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4",
"events_url": "https://api.github.com/users/kyoto7250/events{/privacy}",
"followers_url": "https://api.github.com/users/kyoto7250/followers",
"following_url": "https://api.github.com/users/kyoto7250/following{/other_user}",
"gists_url": "https://api.github.com/users/kyoto7250/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kyoto7250",
"id": 50972773,
"login": "kyoto7250",
"node_id": "MDQ6VXNlcjUwOTcyNzcz",
"organizations_url": "https://api.github.com/users/kyoto7250/orgs",
"received_events_url": "https://api.github.com/users/kyoto7250/received_events",
"repos_url": "https://api.github.com/users/kyoto7250/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kyoto7250/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyoto7250/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kyoto7250",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4",
"events_url": "https://api.github.com/users/kyoto7250/events{/privacy}",
"followers_url": "https://api.github.com/users/kyoto7250/followers",
"following_url": "https://api.github.com/users/kyoto7250/following{/other_user}",
"gists_url": "https://api.github.com/users/kyoto7250/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kyoto7250",
"id": 50972773,
"login": "kyoto7250",
"node_id": "MDQ6VXNlcjUwOTcyNzcz",
"organizations_url": "https://api.github.com/users/kyoto7250/orgs",
"received_events_url": "https://api.github.com/users/kyoto7250/received_events",
"repos_url": "https://api.github.com/users/kyoto7250/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kyoto7250/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyoto7250/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kyoto7250",
"user_view_type": "public"
}
] |
[
"Hello, I have an interest in this issue.\r\nIs the `Dataset.to_dict` you are describing correct in the code here?\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/arrow_dataset.py#L4633-L4667",
"Yes, this is where `Dataset.to_dict` is defined.",
"#self-assign"
] | 2023-03-03T16:17:10
| 2023-03-27T13:26:40
| 2023-03-27T13:26:40
|
COLLABORATOR
| null | null | null | null |
Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent.
Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5606/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5606/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 23 days, 21:09:30
|
https://api.github.com/repos/huggingface/datasets/issues/5604
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5604/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5604/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5604/events
|
https://github.com/huggingface/datasets/issues/5604
| 1,608,304,775
|
I_kwDODunzps5f3MiH
| 5,604
|
Problems with downloading The Pile
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11065386?v=4",
"events_url": "https://api.github.com/users/sentialx/events{/privacy}",
"followers_url": "https://api.github.com/users/sentialx/followers",
"following_url": "https://api.github.com/users/sentialx/following{/other_user}",
"gists_url": "https://api.github.com/users/sentialx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sentialx",
"id": 11065386,
"login": "sentialx",
"node_id": "MDQ6VXNlcjExMDY1Mzg2",
"organizations_url": "https://api.github.com/users/sentialx/orgs",
"received_events_url": "https://api.github.com/users/sentialx/received_events",
"repos_url": "https://api.github.com/users/sentialx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sentialx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sentialx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sentialx",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! \r\n\r\n\r\nYou can specify `download_config=DownloadConfig(resume_download=True))` in `load_dataset` to resume the download when re-running the code after the timeout error:\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\ndataset = load_dataset('the_pile', split='train', cache_dir='F:\\datasets', download_config=DownloadConfig(resume_download=True))\r\n```\r\n\r\n",
"@mariosasko , I used your suggestion but its not saving anything , just stops and runs from the same point .\r\nbelow is the script to download and save on disk .\r\n\r\n```\r\nfrom datasets import load_dataset, DownloadConfig\r\n\r\n\r\n#load the Pile dataset from Hugging Face Datasets\r\n#dataset = load_dataset('the_pile')\r\ndataset = load_dataset('the_pile', split='train', cache_dir='datasets', download_config=DownloadConfig(resume_download=True))\r\n\r\n\r\n# save each file in the dataset to disk\r\nfor i, example in enumerate(dataset['train']):\r\n filename = f'pile_file_{i}.json'\r\n with open(filename, 'w') as f:\r\n f.write(str(example))\r\n\r\nprint(\"Finished saving Pile dataset files to disk.\")\r\n```\r\n",
"@mariosasko , it shows nothing in dataset folder\r\n\r\n```\r\n du -sh /mnt/nlp/hugging_face/*\r\n20K /mnt/nlp/hugging_face/datasets\r\n4.0K /mnt/nlp/hugging_face/download_pile.py\r\n```\r\n",
"@mariosasko \r\n\r\n```\r\nroot@d20f0ab8f4f8:/mnt/hugging_face# python3 download_pile.py\r\nNo config specified, defaulting to: the_pile/all\r\nDownloading and preparing dataset the_pile/all to /mnt/hugging_face/datasets/the_pile/all/0.0.0/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349...\r\nDownloading data files: 0%| | 0/3 [00:00<?, ?it/s]\r\n\r\n\r\n\r\n\r\n\r\nDownloading data: 70%|████████████████████████████████████████████████████████████████████▊ | 10.7G/15.2G [12:09<11:53, 6.36MB/s]\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 15.2G/15.2G [22:15<00:00, 7.25MB/s]\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 15.2G/15.2G [46:17<00:00, 5.48MB/s]\r\nDownloading data: 40%|██████████████████████████████████████▏ | 6.07G/15.3G [50:49<1:17:02, 1.99MB/s]\r\nTraceback (most recent call last):██████████████████████████▊ | 6.07G/15.3G [50:49<25:35:23, 99.9kB/s]\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 444, in _error_catcher\r\n yield\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 567, in read\r\n data = self._fp_read(amt) if not fp_closed else b\"\"\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 525, in _fp_read\r\n data = self._fp.read(chunk_amt)\r\n File \"/usr/lib/python3.8/http/client.py\", line 459, in read\r\n n = self.readinto(b)\r\n File \"/usr/lib/python3.8/http/client.py\", line 503, in readinto\r\n n = self.fp.readinto(b)\r\n File \"/usr/lib/python3.8/socket.py\", line 669, in readinto\r\n return self._sock.recv_into(b)\r\n File \"/usr/lib/python3.8/ssl.py\", line 1241, in recv_into\r\n return self.read(nbytes, buffer)\r\n File \"/usr/lib/python3.8/ssl.py\", line 1099, in read\r\n return self._sslobj.read(len, buffer)\r\nConnectionResetError: [Errno 104] Connection reset by peer\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 816, in generate\r\n yield from self.raw.stream(chunk_size, decode_content=True)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 628, in stream\r\n data = self.read(amt=amt, decode_content=decode_content)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 593, in read\r\n raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n File \"/usr/lib/python3.8/contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 461, in _error_catcher\r\n raise ProtocolError(\"Connection broken: %r\" % e, e)\r\nurllib3.exceptions.ProtocolError: (\"Connection broken: ConnectionResetError(104, 'Connection reset by peer')\", ConnectionResetError(104, 'Connection reset by peer'))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"download_pile.py\", line 6, in <module>\r\n dataset = load_dataset('the_pile', split='train', cache_dir='datasets', download_config=DownloadConfig(resume_download=True))\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 872, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 945, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/root/.cache/huggingface/modules/datasets_modules/datasets/the_pile/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349/the_pile.py\", line 192, in _split_generators\r\n data_dir = dl_manager.download(_DATA_URLS[self.config.name])\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py\", line 427, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 443, in map_nested\r\n mapped = [\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 444, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 363, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 363, in <listcomp>\r\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 346, in _single_map_nested\r\n return function(data_struct)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py\", line 453, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 182, in cached_path\r\n output_path = get_from_cache(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 575, in get_from_cache\r\n http_get(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 379, in http_get\r\n for chunk in response.iter_content(chunk_size=1024):\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 818, in generate\r\n raise ChunkedEncodingError(e)\r\nrequests.exceptions.ChunkedEncodingError: (\"Connection broken: ConnectionResetError(104, 'Connection reset by peer')\", ConnectionResetError(104, 'Connection reset by peer'))\r\n```\r\n",
"Users with slow internet speed are doomed (4MB/s). The dataset downloads fine at minimum speed 10MB/s.\n\nAlso, when the train splits were generated and then I removed the downloads folder to save up disk space, it started redownloading the whole dataset. Is there any way to use the already generated splits instead?",
"@sentialx @mariosasko , anytime on my above script , am I downloading and saving dataset correctly . Please suggest :)",
"@sentialx probably worth noting that `resume_download=True` doesn't directly save the dataset to disk, but instead just helps in resuming the dataset resume on interruption as @mariosasko mentions. resolving resumptions after a crash is [an open issue](https://github.com/huggingface/datasets/issues/5380) at the moment."
] | 2023-03-03T09:52:08
| 2023-10-14T02:15:52
| 2023-03-24T12:44:25
|
NONE
| null | null | null | null |
### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

They should be all 14GB like here (https://the-eye.eu/public/AI/pile/train/).
Alternatively, can I somehow download the files by myself and use the datasets preparing script?
### Steps to reproduce the bug
dataset = load_dataset('the_pile', split='train', cache_dir='F:\datasets')
### Expected behavior
The files should be downloaded correctly.
### Environment info
- `datasets` version: 2.10.1
- Platform: Windows-10-10.0.22623-SP0
- Python version: 3.10.5
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5604/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5604/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 21 days, 2:52:17
|
https://api.github.com/repos/huggingface/datasets/issues/5601
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5601/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5601/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5601/events
|
https://github.com/huggingface/datasets/issues/5601
| 1,606,685,976
|
I_kwDODunzps5fxBUY
| 5,601
|
Authorization error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107404835?v=4",
"events_url": "https://api.github.com/users/OleksandrKorovii/events{/privacy}",
"followers_url": "https://api.github.com/users/OleksandrKorovii/followers",
"following_url": "https://api.github.com/users/OleksandrKorovii/following{/other_user}",
"gists_url": "https://api.github.com/users/OleksandrKorovii/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OleksandrKorovii",
"id": 107404835,
"login": "OleksandrKorovii",
"node_id": "U_kgDOBmbeIw",
"organizations_url": "https://api.github.com/users/OleksandrKorovii/orgs",
"received_events_url": "https://api.github.com/users/OleksandrKorovii/received_events",
"repos_url": "https://api.github.com/users/OleksandrKorovii/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OleksandrKorovii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OleksandrKorovii/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OleksandrKorovii",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! \r\n\r\nIt's better to report this kind of issue in the `huggingface_hub` repo, so if you still haven't resolved it, I suggest you open an issue there.",
"Yeah, I solved it. Problem was in osxkeychain. When I do `hugginface-cli login` it's add token with default account (username)`hg_user` but my repo contain other username. When I changed username in keychain - it works now."
] | 2023-03-02T12:08:39
| 2023-03-14T16:55:35
| 2023-03-14T16:55:34
|
NONE
| null | null | null | null |
### Describe the bug
Get `Authorization error` when try to push data into hugginface datasets hub.
### Steps to reproduce the bug
I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share),
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git clone https://huggingface.co/datasets/namespace/your_dataset_name`
4.
```
cp /somewhere/data/*.json .
git lfs track *.json
git add .gitattributes
git add *.json
git commit -m "add json files"
```
but when I execute `git push` I got the error:
```
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
batch response: Authorization error.
error: failed to push some refs to 'https://huggingface.co/datasets/zeusfsx/ukrainian-news'
```
Size of data ~100Gb. I have five json files - different parts.
### Expected behavior
All my data pushed into hub
### Environment info
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.10.10
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107404835?v=4",
"events_url": "https://api.github.com/users/OleksandrKorovii/events{/privacy}",
"followers_url": "https://api.github.com/users/OleksandrKorovii/followers",
"following_url": "https://api.github.com/users/OleksandrKorovii/following{/other_user}",
"gists_url": "https://api.github.com/users/OleksandrKorovii/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OleksandrKorovii",
"id": 107404835,
"login": "OleksandrKorovii",
"node_id": "U_kgDOBmbeIw",
"organizations_url": "https://api.github.com/users/OleksandrKorovii/orgs",
"received_events_url": "https://api.github.com/users/OleksandrKorovii/received_events",
"repos_url": "https://api.github.com/users/OleksandrKorovii/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OleksandrKorovii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OleksandrKorovii/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OleksandrKorovii",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5601/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5601/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 12 days, 4:46:55
|
https://api.github.com/repos/huggingface/datasets/issues/5600
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5600/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5600/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5600/events
|
https://github.com/huggingface/datasets/issues/5600
| 1,606,585,596
|
I_kwDODunzps5fwoz8
| 5,600
|
Dataloader getitem not working for DreamboothDatasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/76955987?v=4",
"events_url": "https://api.github.com/users/salahiguiliz/events{/privacy}",
"followers_url": "https://api.github.com/users/salahiguiliz/followers",
"following_url": "https://api.github.com/users/salahiguiliz/following{/other_user}",
"gists_url": "https://api.github.com/users/salahiguiliz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/salahiguiliz",
"id": 76955987,
"login": "salahiguiliz",
"node_id": "MDQ6VXNlcjc2OTU1OTg3",
"organizations_url": "https://api.github.com/users/salahiguiliz/orgs",
"received_events_url": "https://api.github.com/users/salahiguiliz/received_events",
"repos_url": "https://api.github.com/users/salahiguiliz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/salahiguiliz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/salahiguiliz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/salahiguiliz",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! \r\n\r\n> (see example of DreamboothDatasets)\r\n\r\n\r\nCould you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data."
] | 2023-03-02T11:00:27
| 2023-03-13T17:59:35
| 2023-03-13T17:59:35
|
NONE
| null | null | null | null |
### Describe the bug
Dataloader getitem is not working as before (see example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529))
moving Datasets to 2.8.0 solved the issue.
### Steps to reproduce the bug
1- using DreamBoothDataset to load some images
2- error after loading when trying to visualise the images
### Expected behavior
I was expecting a numpy array of the image
### Environment info
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5600/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5600/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 11 days, 6:59:08
|
https://api.github.com/repos/huggingface/datasets/issues/5597
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5597/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5597/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5597/events
|
https://github.com/huggingface/datasets/issues/5597
| 1,604,928,721
|
I_kwDODunzps5fqUTR
| 5,597
|
in-place dataset update
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/speedcell4",
"id": 3585459,
"login": "speedcell4",
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/speedcell4",
"user_view_type": "public"
}
|
[
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] |
closed
| false
| null |
[] |
[
"We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.\r\n\r\nIn your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nNote that datasets loaded from disk (memory mapped) are not loaded in memory, and therefore the new dataset actually use the same buffers as the old one.",
"Thank you for your detailed reply.\r\n\r\n> In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nI understand this, but it still copies the old dataset to create the new one, is this correct? So maybe it is not memory-consuming, but time-consuming?",
"Indeed, and because of that it is more efficient to add multiple rows at once instead of one by one, using `concatenate_datasets` for example."
] | 2023-03-01T12:58:18
| 2023-03-02T13:30:41
| 2023-03-02T03:47:00
|
NONE
| null | null | null | null |
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: [],
>>> num_rows: 0
>>> })
ds = ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: ['a', 'b'],
>>> num_rows: 1
>>> })
```
### Feature request
Call for in-place dataset update functions, that update the existing `Dataset` in place without creating a new copy. The interface is supposed to keep the same style as PyTorch, such as the in-place version of a `function` is named `function_`. For example, the in-pace version of `add_item`, i.e., `add_item_`, immediately updates the `Dataset`.
```python
from datasets import Dataset
ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: [],
>>> num_rows: 0
>>> })
ds.add_item_({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: ['a', 'b'],
>>> num_rows: 1
>>> })
```
### Related Functions
* `.map`
* `.filter`
* `.add_item`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/speedcell4",
"id": 3585459,
"login": "speedcell4",
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/speedcell4",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5597/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5597/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 14:48:42
|
https://api.github.com/repos/huggingface/datasets/issues/5596
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5596/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5596/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5596/events
|
https://github.com/huggingface/datasets/issues/5596
| 1,604,919,993
|
I_kwDODunzps5fqSK5
| 5,596
|
[TypeError: Couldn't cast array of type] Can only load a subset of the dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data",
"We've updated the dataset to remove the extra `labels` field from some files, closing this issue. Thanks!",
"A similar error occurs in the Pile dataset (EleutherAI/the_pile)\r\n\r\nLoading the dataset produces the following error.\r\n\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<file: string, id: string>\r\nto\r\n{'id': Value(dtype='string', id=None)}\r\n```\r\n",
"I think this was fixed in https://huggingface.co/datasets/EleutherAI/the_pile/discussions/11",
"i have the same problem ,how to solve :\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\nTypeError: Couldn't cast array of type\r\nlist<item: string>\r\nto\r\n{'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)}"
] | 2023-03-01T12:53:08
| 2023-12-05T03:22:00
| 2023-03-02T11:12:11
|
NONE
| null | null | null | null |
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 2132, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<type: string, action: string, datetime: timestamp[s], author: string, title: string, description: string, comment_id: int64, comment: string, labels: list<item: string>>
to
{'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)}
```
But I can succesfully load a subset of the dataset, for example this works:
```python
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train", data_files=[f"data/data-{x}.jsonl" for x in range(10)])
```
and `ds.features` returns:
```
{'repo': Value(dtype='string', id=None),
'org': Value(dtype='string', id=None),
'issue_id': Value(dtype='int64', id=None),
'issue_number': Value(dtype='int64', id=None),
'pull_request': {'user_login': Value(dtype='string', id=None),
'repo': Value(dtype='string', id=None),
'number': Value(dtype='int64', id=None)},
'events': [{'type': Value(dtype='string', id=None),
'action': Value(dtype='string', id=None),
'datetime': Value(dtype='timestamp[s]', id=None),
'author': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None),
'description': Value(dtype='string', id=None),
'comment_id': Value(dtype='int64', id=None),
'comment': Value(dtype='string', id=None)}]}
```
So I'm not sure if there's an issue with just some of the files. Grateful if you have any suggestions to fix the issue.
Side note:
I saw this related [issue](https://github.com/huggingface/datasets/issues/3637) and tried to write a loading script to have `events` as a `Sequence` and not `list` [here](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/blob/main/loading.py) (the script was renamed). It worked with a subset locally but doesn't for the remote dataset it can't find https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/resolve/main/data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train")
```
### Expected behavior
Load the entire dataset succesfully.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5596/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5596/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 22:19:03
|
https://api.github.com/repos/huggingface/datasets/issues/5594
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5594/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5594/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5594/events
|
https://github.com/huggingface/datasets/issues/5594
| 1,603,980,995
|
I_kwDODunzps5fms7D
| 5,594
|
Error while downloading the xtreme udpos dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24687672?v=4",
"events_url": "https://api.github.com/users/simran-khanuja/events{/privacy}",
"followers_url": "https://api.github.com/users/simran-khanuja/followers",
"following_url": "https://api.github.com/users/simran-khanuja/following{/other_user}",
"gists_url": "https://api.github.com/users/simran-khanuja/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simran-khanuja",
"id": 24687672,
"login": "simran-khanuja",
"node_id": "MDQ6VXNlcjI0Njg3Njcy",
"organizations_url": "https://api.github.com/users/simran-khanuja/orgs",
"received_events_url": "https://api.github.com/users/simran-khanuja/received_events",
"repos_url": "https://api.github.com/users/simran-khanuja/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simran-khanuja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simran-khanuja/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simran-khanuja",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! I cannot reproduce this error on my machine.\r\n\r\nThe raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:\r\n```python\r\ntrain_dataset = load_dataset('xtreme', 'udpos.English', split=\"train\", cache_dir=args.cache_dir, download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n```",
"Hi! Apologies for the delayed response! I tried the above and it doesn't solve the issue. Actually, the dataset gets downloaded most times, but sometimes this error occurs (at random afaik). Is it possible that there is a server issue for this particular dataset? I am able to download other datasets using the same code on the same machine with no issues :( I get this error now : \r\n```\r\nDownloading data: 16%|███████████████▌ | 55.9M/355M [04:45<25:25, 196kB/s]\r\nTraceback (most recent call last):\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 1107, in <module>\r\n main()\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 439, in main\r\n en_dataset = load_dataset(\"xtreme\", \"udpos.English\", split=\"train\", download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 872, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 949, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/utils/info_utils.py\", line 62, in verify_checksums\r\n raise NonMatchingChecksumError(\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-3105/ud-treebanks-v2.5.tgz']\r\nSet `verification_mode='no_checks'` to skip checksums verification and ignore this error\r\n```",
"If this happens randomly, then this means the data file from the error message is not always downloaded correctly. \r\n\r\nThe only solution in this scenario is to download the dataset again by passing `download_mode=\"force_redownload\"` to the `load_dataset` call.",
"Wow. I effectively have to redownload a dataset of 1TB because of this now?\r\nBecause 3% of its parts are broken?\r\n\r\nWhy is this downloader library so sh*t and badly documented also? I found almost nothing on the net, at least finally this issue about the problem here.\r\nNo words to express how disappointed I am by that dataset tool provided by Huggingface here, which I sadly have to use because HF is the only place where the Dataset I plan to work with is hosted....\r\n\r\nI mean... checksum check after download... or hitting timeout of a part... and redownload if not matching... that's content of every junior developer training session.\r\n\r\nI added `verification_mode=\"all_checks\"`. And it really calculated checksums for 4096 parts of ~350 MB... But then did nothing and tried to extract still, hitting the error again. \r\n\r\nEDIT: Apparently it is able to fix it by getting a little help: Just delete the broken parts and associated files from `~/.cache/huggingface/datasets/downloads`",
"I'm getting it too, although just retrying fixed it. Nevertheless, the dataset is too large to have re-downloaded the whole thing, for it's probably just one file with an issue. It would be good to know if there's a way people could manually examine the files (first for sizes, then possibly checksums)... going to the web or elsewhere to compare and correct it by hand, if ever needed.",
"Okay, no, it got further but it is repeatedly giving me:\r\n```/home/jaggz/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n^^^^^^^^^^^\r\nFile \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\nraise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\nFile \"/home/jaggz/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 625, in <module>\r\nmain()\r\nFile \"/home/jaggz/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\nraw_datasets[\"train\"] = load_dataset(\r\n^^^^^^^^^^^^^\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/load.py\", line 2153, in load_dataset\r\nbuilder_instance.download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 954, in download_and_prepare\r\nself._download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1717, in _download_and_prepare\r\nsuper()._download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1049, in _download_and_prepare\r\nself._prepare_split(split_generator, **prepare_split_kwargs)\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1555, in _prepare_split\r\nfor job_id, done, content in self._prepare_split_single(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1712, in _prepare_split_single\r\nraise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the datase\r\n",
"@RuntimeRacer \r\n> EDIT: Apparently it is able to fix it by getting a little help: Just delete the broken parts and associated files from `~/.cache/huggingface/datasets/downloads`\r\n\r\nHow do you know the broken parts?\r\nMine's consistently erroring and.. yeah, really this thing should be able to check the files (but where's that even done)...\r\n\r\n2023-11-02 00:14:09.846055: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py:299: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v4.34. Please use `token` instead.\r\n warnings.warn(\r\n11/02/2023 00:14:37 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: True\r\n11/02/2023 00:14:37 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\n...\r\nlogging_dir=./whisper-tiny-en/runs/Nov02_00-14-28_jsys,\r\n...\r\nrun_name=./whisper-tiny-en,\r\n...\r\nweight_decay=0.0,\r\n)\r\n11/02/2023 00:14:37 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\n...\r\nlogging_dir=./whisper-tiny-en/runs/Nov02_00-14-28_jsys,\r\n...\r\nweight_decay=0.0,\r\n)\r\n\r\nDownloading data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 2426.42it/s]\r\n\r\nExtracting data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 421.16it/s]\r\n\r\nDownloading data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 18707.87it/s]\r\n\r\nExtracting data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 3754.97it/s]\r\n\r\nGenerating train split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\n...\r\nReading metadata...: 948736it [00:23, 40632.92it/s] \r\n\r\nGenerating train split: 1 examples [00:23, 23.37s/ examples]\r\n...\r\nGenerating train split: 948736 examples [08:28, 1866.15 examples/s]\r\n\r\nGenerating validation split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\n\r\nReading metadata...: 16089it [00:00, 157411.88it/s]\u001b[A\r\nReading metadata...: 16354it [00:00, 158233.27it/s]\r\n\r\nGenerating validation split: 1 examples [00:00, 7.60 examples/s]\r\nGenerating validation split: 16354 examples [00:14, 1154.77 examples/s]\r\n\r\nGenerating test split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\nReading metadata...: 16354it [00:00, 194855.03it/s]\r\n\r\nGenerating test split: 1 examples [00:00, 4.53 examples/s]\r\nGenerating test split: 16354 examples [00:07, 2105.43 examples/s]\r\n\r\nGenerating other split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\nReading metadata...: 290846it [00:01, 235823.90it/s]\r\n\r\nGenerating other split: 1 examples [00:01, 1.27s/ examples]\r\n...\r\nGenerating other split: 290846 examples [02:12, 2196.96 examples/s]\r\nGenerating invalidated split: 0 examples [00:00, ? examples/s]\r\nReading metadata...: 252599it [00:01, 241965.85it/s]\r\n\r\nGenerating invalidated split: 1 examples [00:01, 1.08s/ examples]\r\n...\r\nGenerating invalidated split: 60130 examples [00:34, 1764.14 examples/s]\r\nTraceback (most recent call last):\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1676, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/home/j/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\n result[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n ^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\n raise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 625, in <module>\r\n main()\r\n File \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\n raw_datasets[\"train\"] = load_dataset(\r\n ^^^^^^^^^^^^^\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/load.py\", line 2153, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 954, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1717, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1049, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1555, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1712, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n",
"@jaggzh Hi, I actually came around with a fix for this, wasn't that easy to solve since there were a lot of hidden pitfalls in the code, and it's quite hacky, but I was able to download the full dataset.\r\n\r\nI just didn't create a PR for it yet since I was too lazy to create a fork and change my local repo's origin. 😅 \r\nLet me try to do this tonight, I'll give you a ping once it's up.\r\n\r\nEDIT: And no, what I wrote above about adding a param to the download config does NOT solve it apparently. A code fix is required here.",
"@jaggzh PR is up: https://github.com/huggingface/datasets/pull/6380\r\n\r\n🤞 on approval for merge to the main repo.",
"@mariosasko Can you re-open this? We really need some better diagnostics output, at the least, to locate which files are contributing, some checksum output, etc. I can't even tell if this is a mozilla...py issue or huggingface datasets or ....",
"@RuntimeRacer \r\nBeautiful, thank you so much. I patched with your PR and am re-running now.\r\n(I'm running this script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py)\r\nOkay, actually it failed; so now I'm running with verification_mode='all_checks' added to the load_data() call and it's re-running now. Wish me luck.\r\n(Note: It's generating checksums; I don't see an option that handles anything between basic_checks and all_checks -- Something checking dl'ed files' lengths would be a good common fix I'd think; corruption is more rare nowadays than a short file (although maybe your patch helps prevent that in the first place.) :}",
"@RuntimeRacer \r\nNo luck. Sigh.\r\n[Edit: My tmux copy didn't get some data. That was weird. I'm adding in the initial part of the output:]\r\n```\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 2190.69it/s]\r\nComputing checksums: 100%|██████████| 41/41 [11:39<00:00, 17.05s/it] Extracting data files: 100%|██████████| 5/5 [00:00<00:00, 12.37it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 107.64it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 3149.82it/s]\r\nReading metadata...: 948736it [00:03, 243227.36it/s]s/s]\r\n...\r\n```\r\n```\r\n...\r\nReading metadata...: 252599it [00:01, 249267.71it/s]xamples/s]\r\nGenerating invalidated split: 60130 examples [00:31, 1916.33 examples/s]\r\nTraceback (most recent call last):\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1676, in _prepare_split_single\r\nfor key, record in generator:\r\nFile \"/home/j/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n^^^^^^^^^^^\r\nFile \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\nraise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\nFile \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 627, in <module>\r\nmain()\r\nFile \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\nraw_datasets[\"train\"] = load_dataset(\r\n^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/load.py\", line 2153, in load_dataset\r\nbuilder_instance.download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 954, in download_and_prepare\r\nself._download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1717, in _download_and_prepare\r\nsuper()._download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1049, in _download_and_prepare\r\nself._prepare_split(split_generator, **prepare_split_kwargs)\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1555, in _prepare_split\r\nfor job_id, done, content in self._prepare_split_single(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1712\r\n```",
"I'm unable to reproduce this error. Based on https://github.com/psf/requests/issues/4956, newer releases of `urllib3` check the returned content length by default, so perhaps updating `requests` and `urllib3` to the latest versions (`pip install -U requests urllib3`) and loading the dataset with `datasets.load_dataset(\"xtreme\", \"udpos.English\", download_config=datasets.DownloadConfig(resume_download=True))` (re-run when it fails to resume the download) can fix the issue.",
"@jaggzh I think you will need to re-download the whole dataset with my patched code. Files which have already been downloaded and marked as complete by the broken downloader won't be detected even on re-run (I described that in the PR).\r\nI also had to download reazonspeech, which is over 1TB, twice. 🙈 \r\nFor re-download, you need to manually delete the dataset files from your local machine's huggingface download cache.\r\n\r\n@mariosasko Not sure how you tested it, but it's not an issue in `requests` or `urllib`. The problem is the huggingface downloader, which generates a nested download thread for the actual download I think.\r\nThe issue I had with the reazonspeech dataset (https://huggingface.co/datasets/reazon-research/reazonspeech/tree/main) basically was, that it started downloading a part, but sometimes the connection would 'starve' and only continue with a few kilobytes, and eventually stop receiving any data at all.\r\nSometimes it would even recover during the download and finish properly.\r\nHowever, if it did not recover, the request would hit the really generous default timeout (which is 100 seconds I think), however the exception thrown by the failure inside `urllib`, isn't captured or handled by the upper level downloader code of the `datasets` library.\r\n`datasets` even has a retry mechanism, which would continue interrupted downloads if they have the `.incomplete` suffix, which isn't cleared if, for example, a manual `CTRL+C` is sent by the user to the python process.\r\nBut: If it runs into that edge case I described above (TL;DR: connection starves after minutes + timeout exception which isn't captured), the cache downloader will consider the download as successful and remove the `.incomplete` suffix nevertheless, leaving the archive file in a corrupted state.\r\n\r\nHonestly, I spent hours on trying to figure out what was even going on and why the retry mechanics of the cache downloader didn't work at all.\r\nBut it is indeed an issue caused by the download process itself not receiving any info about actual content size and filesize size on disk of the archive to be downloaded, thus, having no direct control in case something fails on the request level.\r\n\r\nIMHO, this requires a major refactor of the way this part of the downloader works.\r\nYet I was able to quick-fix it by adding some synthetic Exception handling and explicit retry-handling in the code, als done in my PR.",
"@RuntimeRacer \r\nUgh. It took a day. I'm seeing if I can get some debug code in here to examine the files myself. (I'm not sure why checksum tests would fail, so, yeah, I think you're right -- this stuff needs some work. Going through ipdb right now to try to get some idea of what's going on in the code).",
"@RuntimeRacer Data can only be appended to the `.incomplete` files if `load_dataset` is called with `download_config=DownloadConfig(resume_download=True)`. \r\n\r\nWhere exactly does this exception happen (in the code)? The error stack trace would help a lot.",
"@mariosasko I do not have a trace of this exception nor do I know which type it is. I am honestly not even sure if an exception is thrown, or the process just aborts without error.\r\n\r\n> @RuntimeRacer Data can only be appended to the .incomplete files if load_dataset is called with download_config=DownloadConfig(resume_download=True).\r\n\r\nWell, I think I did a very clear explaination of the issue in the PR I shared, and the description above, but maybe I wasn't precise enough. Let me try to explain once more:\r\n\r\nWhat you mention here is the \"normal\" case, if the process is aborted. In this case, there will be files with `.incomplete` suffix, which the cache downloader can continue to download. That is correct.\r\n\r\nBUT: What I am talking about all the time is an edge case: if the download step crashes / timeouts internally, the cache downloader will NOT be aware of this, and REMOVES the `.incomplete` suffix.\r\nIt does NOT know that the file is incomplete when the `http_get` function returns and will remove the `.incomplete` suffix in any case once `http_get` returns.\r\nBut the problem is that `http_get` returns without failure, even if the download failed.\r\nAnd this is still a problem even with latest `urllib` and `requests` library.\r\n",
"@RuntimeRacer Updating `urllib3` and `requests` to the latest versions fixes the issue explained in this [blog](https://blog.petrzemek.net/2018/04/22/on-incomplete-http-reads-and-the-requests-library-in-python/) post. \r\n\r\nHowever, the issue explained above seems more similar to [this](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) one. To address it, we can reduce the default timeout to 10 seconds (btw, this was the initial value, but it was causing problems for some users) and expose a config variable so that users can easily control it. Additionally, we can re-run `http_get` similarly to https://github.com/huggingface/huggingface_hub/pull/1766 when the connection/timeout error happens to make the logic even more robust. Would this work for you? The last part is what you did in the PR, right?\r\n\r\n@jaggzh From all the datasets mentioned in this issue, `xtreme` is the only one that stores the data file checksums in the metadata. So, the checksum check has no effect when enabled for the rest of the datasets.",
"(I don't have any .incomplete files, just the extraction errors.)\r\nI was going through the code to try to relate filenames to the hex/hash files, but realized I might not need to.\r\nSo instead I coded up a script in bash to examine the tar files for validity (had an issue with bash subshells not adding to my array so I had cgpt recode it in perl).\r\n\r\n```perl\r\n#!/usr/bin/perl\r\nuse strict;\r\nuse warnings;\r\n\r\n# Initialize the array to store tar files\r\nmy @tars;\r\n\r\n# Open the current directory\r\nopendir(my $dh, '.') or die \"Cannot open directory: $!\";\r\n\r\n# Read files in the current directory\r\nwhile (my $f = readdir($dh)) {\r\n # Skip files ending with lock, json, or py\r\n next if $f =~ /\\.(lock|json|py)$/;\r\n\r\n # Use the `file` command to determine the type of file\r\n my $ft = `file \"$f\"`;\r\n\r\n # If it's a tar archive, add it to the list\r\n if ($ft =~ /tar archive/) {\r\n push @tars, $f;\r\n }\r\n}\r\n\r\nclosedir($dh);\r\n\r\nprint \"Final Tars count: \" . scalar(@tars) . \"\\n\";\r\n\r\n# Iterate over the tar files and check them\r\nforeach my $i (0 .. $#tars) {\r\n my $f = $tars[$i];\r\n printf '%d/%d ', $i+1, scalar(@tars);\r\n \r\n # Use `ls -lgG` to list the files, similar to the original bash script\r\n system(\"ls -lgG '$f'\");\r\n\r\n # Check the integrity of the tar file\r\n my $errfn = \"/tmp/$f.tarerr\";\r\n if (system(\"tar tf '$f' > /dev/null 2> '$errfn'\") != 0) {\r\n print \" BAD $f\\n\";\r\n print \" ERR: \";\r\n system(\"cat '$errfn'\");\r\n }\r\n\r\n # Remove the error file if it exists\r\n unlink $errfn if -e $errfn;\r\n}\r\n```\r\n\r\nThis found one hash file that errored in the tar extraction, and one small tmp* file that also was supposedly a tar and was erroring. I removed those two and re-data loaded.. it grabbed just what it needed and I'm on my way. Yay!\r\n\r\nSo... is there a way for the datasets api to get file sizes? That would be a very easy and fast test, leaving checksum slowdowns for extra-messed-up situations.\r\n\r\n",
"> @RuntimeRacer Updating `urllib3` and `requests` to the latest versions fixes the issue explained in this [blog](https://blog.petrzemek.net/2018/04/22/on-incomplete-http-reads-and-the-requests-library-in-python/) post.\r\n> \r\n> However, the issue explained above seems more similar to [this](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) one. To address it, we can reduce the default timeout to 10 seconds (btw, this was the initial value, but it was causing problems for some users) and expose a config variable so that users can easily control it. Additionally, we can re-run `http_get` similarly to [huggingface/huggingface_hub#1766](https://github.com/huggingface/huggingface_hub/pull/1766) when the connection/timeout error happens to make the logic even more robust. Would this work for you? The last part is what you did in the PR, right?\r\n> \r\n> @jaggzh From all the datasets mentioned in this issue, `xtreme` is the only one that stores the data file checksums in the metadata. So, the checksum check has no effect when enabled for the rest of the datasets.\r\n\r\n@mariosasko Well if you look at my commit date, you will see that I run into this problem still in October. The blog post you mention and the update in the pull request for `urllib` was from July: https://github.com/psf/requests/issues/4956#issuecomment-1648632935\r\n\r\nBut yeah the [issue on StackOverflow](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) you mentioned seems like that's the source issue I was running into there.\r\nI experimented with timeouts, but changing them didn't help to resolve the issue of the starving connection unfortunately.\r\nHowever, https://github.com/huggingface/huggingface_hub/pull/1766 seems like that could be working; it's very similar to my change. So yeah I think this would fix it probably.\r\n\r\nAlso I can confirm the checksum option did not work for [reazonspeech](https://huggingface.co/datasets/reazon-research/reazonspeech/tree/main) as well. So maybe it's a double edge case that only occurs for some datasets. 🤷♂️ ",
"Also, the hf urls to files -- while I can't see a way of getting a listing from the hf site side -- do include the file size in the http header response. So we do have a quick way of just verifying lengths for resume. (This message may not be interesting to you all).\r\n\r\nFirst, a json clip (mozilla-foundation___common_voice_11_0/en/11.0.0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/dataset_info.json):\r\n\r\n* I don't know how specific this .json is to mozilla common voice\r\n* Note that *dataset_size* is not the dataset size :) DatasetInfo class docs indicate it might be their \"combined size in bytes of the Arrow tables for all splits.\"\r\n* *num_bytes*: does match the individual file size though, and matches the http header (further down)\r\n```\r\n{\r\n \"builder_name\" : \"common_voice_11_0\",\r\n...\r\n \"config_name\" : \"en\",\r\n \"dataset_name\" : \"common_voice_11_0\",\r\n \"dataset_size\" : 1680793952,\r\n...\r\n \"download_checksums\" : {\r\n...\r\n \"https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/resolve/main/audio/en/invalidated/en_invalidated_3.tar\" : {\r\n \"checksum\" : null,\r\n \"num_bytes\" : 2110853120\r\n },\r\n...\r\n```\r\n\r\n```bash\r\n~/.cache/huggingface/datasets/downloads$ ls -lgG b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40* | cut -c 14-\r\n```\r\n```\r\n2110853120 Nov 1 16:28 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40\r\n148 Nov 1 16:28 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40.json\r\n0 Nov 1 16:07 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40.lock\r\n```\r\n\r\n* Note the -L to follow redirects. Two headers are below:\r\n\r\n```bash\r\n$ curl -I -L https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/resolve/main/audio/en/invalidated/en_invalidated_3.tar\r\n```\r\n```\r\nHTTP/2 302 \r\ncontent-type: text/plain; charset=utf-8\r\ncontent-length: 1215\r\nlocation: https://cdn-lfs.huggingface.co/repos/00/ce/00ce867b4ae70bd23a10b60c32a8626d87b2666fc088ad03f86b94788faff554/984086fc250badece2992e8be4d7c4430f7c1208fb8bf37dc7c4aecdc803b220?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27en_invalidated_3.tar%3B+filename%3D%22en_invalidated_3.tar%22%3B&response-content-type=application%2Fx-tar&Expires=1699389040&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY5OTM4OTA0MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy8wMC9jZS8wMGNlODY3YjRhZTcwYmQyM2ExMGI2MGMzMmE4NjI2ZDg3YjI2NjZmYzA4OGFkMDNmODZiOTQ3ODhmYWZmNTU0Lzk4NDA4NmZjMjUwYmFkZWNlMjk5MmU4YmU0ZDdjNDQzMGY3YzEyMDhmYjhiZjM3ZGM3YzRhZWNkYzgwM2IyMjA%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=WYc32e75PqbKSAv3KTpG86ooFT6oOyDDQpCt1i2B8gVS10J3qvpZlDmxaBgnGlCCl7SRiAvhIQctgwooNtWbUeDqK3T4bAo0-OOrGCuVi-%7EKWUBcoHce7nHWpl%7Ex9ubHS%7EFoYcGB2SCEqh5fIgGjNV-VKRX6TSXkRto5bclQq4VCJKHufDsJ114A1V4Qu%7EYiRIWKG4Gi93Xv4OFhyWY0uqykvP5c0x02F%7ELX0m3WbW-eXBk6Fw2xnV1XLrEkdR-9Ax2vHqMYIIw6yV0wWEc1hxE393P9mMG1TNDj%7EXDuCoOaA7LbrwBCxai%7Ew2MopdPamTXyOia5-FnSqEdsV29v4Q__&Key-Pair-Id=KVTP0A1DKRTAX\r\ndate: Sat, 04 Nov 2023 20:30:40 GMT\r\nx-powered-by: huggingface-moon\r\nx-request-id: Root=1-6546a9f0-5e7f729d09bdb38e35649a7e\r\naccess-control-allow-origin: https://huggingface.co\r\nvary: Origin, Accept\r\naccess-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range\r\nx-repo-commit: 23b4059922516c140711b91831aa3393a22e9b80\r\naccept-ranges: bytes\r\nx-linked-size: 2110853120\r\nx-linked-etag: \"984086fc250badece2992e8be4d7c4430f7c1208fb8bf37dc7c4aecdc803b220\"\r\nx-cache: Miss from cloudfront\r\nvia: 1.1 f31a6426ebd75ce4393909b12f5cbdcc.cloudfront.net (CloudFront)\r\nx-amz-cf-pop: LAX53-P4\r\nx-amz-cf-id: BcYMFcHVcxPome2IjAvx0ZU90G41QlNI_HEHDGDqCQaEPvrOsnsGXw==\r\n\r\nHTTP/2 200 \r\ncontent-type: application/x-tar\r\ncontent-length: 2110853120\r\ndate: Sat, 04 Nov 2023 20:19:35 GMT\r\nlast-modified: Fri, 18 Nov 2022 15:08:22 GMT\r\netag: \"acac28988e2f7e73b68e865179fbd008\"\r\nx-amz-storage-class: INTELLIGENT_TIERING\r\nx-amz-version-id: LgTuOcd9FGN4JnAXp26O.1v2VW42GPtF\r\ncontent-disposition: attachment; filename*=UTF-8''en_invalidated_3.tar; filename=\"en_invalidated_3.tar\";\r\naccept-ranges: bytes\r\nserver: AmazonS3\r\nx-cache: Hit from cloudfront\r\nvia: 1.1 d07c8167eda81d307ca96358727f505e.cloudfront.net (CloudFront)\r\nx-amz-cf-pop: LAX50-P5\r\nx-amz-cf-id: 6oNZg_V8U1M_JXsMHQAPuRmDfxbY2BnMUWcVH0nz3VnfEZCzF5lgkQ==\r\nage: 666\r\ncache-control: public, max-age=604800, immutable, s-maxage=604800\r\nvary: Origin\r\n\r\n```\r\n"
] | 2023-02-28T23:40:53
| 2023-11-04T20:45:56
| 2023-07-24T14:22:18
|
NONE
| null | null | null | null |
### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4...
Downloading data: 16%|██████████████▏ | 56.9M/355M [03:11<16:43, 297kB/s]
Generating train split: 0%| | 0/6075 [00:00<?, ? examples/s]Traceback (most recent call last):
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1608, in _prepare_split_single
for key, record in generator:
File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 732, in _generate_examples
yield from UdposParser.generate_examples(config=self.config, filepath=filepath, **kwargs)
File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 921, in generate_examples
for path, file in filepath:
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 158, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 211, in _iter_from_path
yield from cls._iter_tar(f)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 167, in _iter_tar
for tarinfo in stream:
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2475, in __iter__
tarinfo = self.next()
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2344, in next
raise ReadError("unexpected end of data")
tarfile.ReadError: unexpected end of data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 855, in <module>
main()
File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 487, in main
train_dataset = load_dataset(dataset_name, source_language, split="train", cache_dir=args.cache_dir, download_mode="force_redownload")
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 872, in download_and_prepare
self._download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 967, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1488, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode="force_redownload")
```
### Expected behavior
Download the udpos dataset
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5594/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5594/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 145 days, 14:41:25
|
https://api.github.com/repos/huggingface/datasets/issues/5586
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5586/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5586/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5586/events
|
https://github.com/huggingface/datasets/issues/5586
| 1,602,961,544
|
I_kwDODunzps5fi0CI
| 5,586
|
.sort() is broken when used after .filter(), only in 2.10.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MattYoon",
"id": 57797966,
"login": "MattYoon",
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"organizations_url": "https://api.github.com/users/MattYoon/orgs",
"received_events_url": "https://api.github.com/users/MattYoon/received_events",
"repos_url": "https://api.github.com/users/MattYoon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MattYoon",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix"
] | 2023-02-28T12:18:09
| 2023-02-28T18:17:26
| 2023-02-28T17:21:59
|
NONE
| null | null | null | null |
### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
This only happens with the 2.10.0 release.
### Steps to reproduce the bug
```Python
from datasets import load_dataset
# dataset with length of 1104
ds = load_dataset('glue', 'ax')['test']
ds = ds.filter(lambda x: x['idx'] > 1100)
ds.sort('premise')
print('Done')
```
File "/home/dongkeun/datasets_test/test.py", line 5, in <module>
ds.sort('premise')
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3959, in sort
sort_table = query_table(
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 588, in query_table
_check_valid_index_key(key, size)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 537, in _check_valid_index_key
_check_valid_index_key(max(key), size=size)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 531, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 1103 is out of bounds for size 3
### Expected behavior
It should sort the dataset and print "Done". Which it does on 2.9.0.
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5586/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5586/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 5:03:50
|
https://api.github.com/repos/huggingface/datasets/issues/5585
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5585/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5585/events
|
https://github.com/huggingface/datasets/issues/5585
| 1,602,190,030
|
I_kwDODunzps5ff3rO
| 5,585
|
Cache is not transportable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hashes used by caching are based on pickle dumps of the function you pass to `map`.\r\n\r\nFinally you may copy the cache to another machine, but all the `cached-*.arrow` files are unlikely to be reloaded.",
"OK good to know. Thanks @lhoestq !"
] | 2023-02-28T00:53:06
| 2023-02-28T21:26:52
| 2023-02-28T21:26:52
|
NONE
| null | null | null | null |
### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereby sharing the downloads.
I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL.
This seems to suggest that the cache is not portable/cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Because copying the cache files _seems_ to work, but I'm not filled with confidence that something isn't going to break.
A related issue, when trying to load a dataset that should come from cache (running in WSL, pointing to cache on the Windows host) it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or how to point it to a different place.
I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656
### Steps to reproduce the bug
View the cache directory in WSL/Windows.
### Expected behavior
Cache can be shared between (virtual) machines and be transportable.
It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location.
### Environment info
```
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- ```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5585/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 20:33:46
|
https://api.github.com/repos/huggingface/datasets/issues/5584
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5584/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5584/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5584/events
|
https://github.com/huggingface/datasets/issues/5584
| 1,601,821,808
|
I_kwDODunzps5fedxw
| 5,584
|
Unable to load coyo700M dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3059998?v=4",
"events_url": "https://api.github.com/users/manuaero/events{/privacy}",
"followers_url": "https://api.github.com/users/manuaero/followers",
"following_url": "https://api.github.com/users/manuaero/following{/other_user}",
"gists_url": "https://api.github.com/users/manuaero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/manuaero",
"id": 3059998,
"login": "manuaero",
"node_id": "MDQ6VXNlcjMwNTk5OTg=",
"organizations_url": "https://api.github.com/users/manuaero/orgs",
"received_events_url": "https://api.github.com/users/manuaero/received_events",
"repos_url": "https://api.github.com/users/manuaero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/manuaero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manuaero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/manuaero",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @manuaero \r\n\r\nThank you for your interest in the COYO dataset.\r\n\r\nOur dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.\r\n\r\nWe provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README.md) to download, so check it out.\r\n\r\nThank you."
] | 2023-02-27T19:35:03
| 2023-02-28T07:27:59
| 2023-02-28T07:27:58
|
NONE
| null | null | null | null |
### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coyo-700m to /root/.cache/huggingface/datasets/kakaobrain___parquet/kakaobrain--coyo-700m-ae729692ae3e0073/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%
1/1 [00:00<00:00, 63.35it/s]
Extracting data files: 100%
1/1 [00:00<00:00, 5.00it/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1859 _time = time.time()
-> 1860 for _, table in generator:
1861 if max_shard_size is not None and writer._num_bytes > max_shard_size:
9 frames
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1891 e = e.__context__
-> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1893
1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset```
### Steps to reproduce the bug
```
from datasets import load_dataset
hf_dataset = load_dataset("kakaobrain/coyo-700m")
```
### Expected behavior
The above commands load the dataset successfully. Or handles exception and continue loading the remainder.
### Environment info
colab. any
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3059998?v=4",
"events_url": "https://api.github.com/users/manuaero/events{/privacy}",
"followers_url": "https://api.github.com/users/manuaero/followers",
"following_url": "https://api.github.com/users/manuaero/following{/other_user}",
"gists_url": "https://api.github.com/users/manuaero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/manuaero",
"id": 3059998,
"login": "manuaero",
"node_id": "MDQ6VXNlcjMwNTk5OTg=",
"organizations_url": "https://api.github.com/users/manuaero/orgs",
"received_events_url": "https://api.github.com/users/manuaero/received_events",
"repos_url": "https://api.github.com/users/manuaero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/manuaero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manuaero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/manuaero",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5584/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5584/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 11:52:55
|
https://api.github.com/repos/huggingface/datasets/issues/5581
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5581/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5581/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5581/events
|
https://github.com/huggingface/datasets/issues/5581
| 1,600,675,489
|
I_kwDODunzps5faF6h
| 5,581
|
[DOC] Mistaken docs on set_format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NightMachinery",
"id": 36224762,
"login": "NightMachinery",
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NightMachinery",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting!"
] | 2023-02-27T08:03:09
| 2023-02-28T19:19:17
| 2023-02-28T19:19:17
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format
<img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png">
While actually running it will result in:
<img width="1094" alt="image" src="https://user-images.githubusercontent.com/36224762/221507032-007dab82-8781-4319-b21a-e6e4d40d97b3.png">
### Steps to reproduce the bug
_
### Expected behavior
_
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5581/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5581/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 11:16:08
|
https://api.github.com/repos/huggingface/datasets/issues/5577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5577/events
|
https://github.com/huggingface/datasets/issues/5577
| 1,598,587,665
|
I_kwDODunzps5fSIMR
| 5,577
|
Cannot load `the_pile_openwebtext2`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wjfwzzc",
"id": 5126316,
"login": "wjfwzzc",
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wjfwzzc",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.\r\n\r\n"
] | 2023-02-24T13:01:48
| 2023-02-24T14:01:09
| 2023-02-24T14:01:09
|
NONE
| null | null | null | null |
### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("the_pile_openwebtext2")
```
### Expected behavior
load as normal.
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5577/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5577/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:59:21
|
https://api.github.com/repos/huggingface/datasets/issues/5576
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5576/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5576/events
|
https://github.com/huggingface/datasets/issues/5576
| 1,598,582,744
|
I_kwDODunzps5fSG_Y
| 5,576
|
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wjfwzzc",
"id": 5126316,
"login": "wjfwzzc",
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wjfwzzc",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Duplicated issue."
] | 2023-02-24T12:57:49
| 2023-02-24T12:58:31
| 2023-02-24T12:58:18
|
NONE
| null | null | null | null |
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
I worked around this by downloading the `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes).
_Originally posted by @tc-wolf in https://github.com/huggingface/datasets/issues/3053#issuecomment-1281392422_
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wjfwzzc",
"id": 5126316,
"login": "wjfwzzc",
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wjfwzzc",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5576/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5576/timeline
| null |
not_planned
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:00:29
|
https://api.github.com/repos/huggingface/datasets/issues/5575
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5575/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5575/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5575/events
|
https://github.com/huggingface/datasets/issues/5575
| 1,598,396,552
|
I_kwDODunzps5fRZiI
| 5,575
|
Metadata for each column
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11356471?v=4",
"events_url": "https://api.github.com/users/parsa-ra/events{/privacy}",
"followers_url": "https://api.github.com/users/parsa-ra/followers",
"following_url": "https://api.github.com/users/parsa-ra/following{/other_user}",
"gists_url": "https://api.github.com/users/parsa-ra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/parsa-ra",
"id": 11356471,
"login": "parsa-ra",
"node_id": "MDQ6VXNlcjExMzU2NDcx",
"organizations_url": "https://api.github.com/users/parsa-ra/orgs",
"received_events_url": "https://api.github.com/users/parsa-ra/received_events",
"repos_url": "https://api.github.com/users/parsa-ra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/parsa-ra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parsa-ra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/parsa-ra",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:\r\n```python\r\ncol_feature = Value(\"string\", metadata=\"Some column-level metadata\")\r\n\r\nfeatures = Features({\"col\": col_feature}, metadata=\"Some schema-level metadata\")\r\n```\r\n\r\nWDYT?",
"Sorry for the late reply, \r\nYes, I think this is the most straight-forward approach with the things that we already have.\r\n\r\n",
"@mariosasko Let me know how I can help.",
"Hi, is this feature to be implemented in the near future? It would be really nice if that would be the case! ",
"Hi, I also need this feature for tell my customer if any of the feature is encrypted with a certain key. "
] | 2023-02-24T10:53:44
| 2024-01-05T21:48:35
| null |
NONE
| null | null | null | null |
### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of preprocessing and see which one works better in our downstream task, here as workaround right now what I do is the compute the hash of the preprocessing that the images went through as part of the new columns name, it would be nice to attach some kinda meta data in these scenarios to the each columns. metadata
### Your contribution
Maybe we could map another relational like database as the metadata?
| null |
{
"+1": 11,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 11,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5575/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5575/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5574
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5574/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5574/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5574/events
|
https://github.com/huggingface/datasets/issues/5574
| 1,598,104,691
|
I_kwDODunzps5fQSRz
| 5,574
|
c4 dataset streaming fails with `FileNotFoundError`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/202907?v=4",
"events_url": "https://api.github.com/users/krasserm/events{/privacy}",
"followers_url": "https://api.github.com/users/krasserm/followers",
"following_url": "https://api.github.com/users/krasserm/following{/other_user}",
"gists_url": "https://api.github.com/users/krasserm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/krasserm",
"id": 202907,
"login": "krasserm",
"node_id": "MDQ6VXNlcjIwMjkwNw==",
"organizations_url": "https://api.github.com/users/krasserm/orgs",
"received_events_url": "https://api.github.com/users/krasserm/received_events",
"repos_url": "https://api.github.com/users/krasserm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/krasserm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krasserm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/krasserm",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Also encountering this issue for every dataset I try to stream! Installed datasets from main:\r\n```\r\n- `datasets` version: 2.10.1.dev0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.2\r\n```\r\n\r\nRepro:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nspigi = load_dataset(\"kensho/spgispeech\", \"dev\", split=\"validation\", streaming=True, use_auth_token=True)\r\nsample = next(iter(spigi))\r\n```\r\n\r\n<details>\r\n<summary> Traceback </summary>\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nClientResponseError Traceback (most recent call last)\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:407, in HTTPFileSystem._info(self, url, **kwargs)\r\n 405 try:\r\n 406 info.update(\r\n--> 407 await _file_info(\r\n 408 self.encode_url(url),\r\n 409 size_policy=policy,\r\n 410 session=session,\r\n 411 **self.kwargs,\r\n 412 **kwargs,\r\n 413 )\r\n 414 )\r\n 415 if info.get(\"size\") is not None:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:792, in _file_info(url, session, size_policy, **kwargs)\r\n 791 async with r:\r\n--> 792 r.raise_for_status()\r\n 794 # TODO:\r\n 795 # recognise lack of 'Accept-Ranges',\r\n 796 # or 'Accept-Ranges': 'none' (not 'bytes')\r\n 797 # to mean streaming only, no random access => return None\r\n\r\nFile ~/venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py:1005, in ClientResponse.raise_for_status(self)\r\n 1004 self.release()\r\n-> 1005 raise ClientResponseError(\r\n 1006 self.request_info,\r\n 1007 self.history,\r\n 1008 status=self.status,\r\n 1009 message=self.reason,\r\n 1010 headers=self.headers,\r\n 1011 )\r\n\r\nClientResponseError: 403, message='Forbidden', url=URL('[https://cdn-lfs.huggingface.co/repos/e2/89/e28905247d6f48bb4edad5baf9b1bb4158e897a13fdf18bf3b8ee89ff8387ab8/46eca7431a7b6bad344bf451800e5b10cea1dd168f26d1027a6d9eb374b7fac3?response-content-disposition=attachment%3B+filename*%3DUTF-8''dev.csv%3B+filename%3D%22dev.csv%22%3B&response-content-type=text/csv&Expires=1677494732&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2UyLzg5L2UyODkwNTI0N2Q2ZjQ4YmI0ZWRhZDViYWY5YjFiYjQxNThlODk3YTEzZmRmMThiZjNiOGVlODlmZjgzODdhYjgvNDZlY2E3NDMxYTdiNmJhZDM0NGJmNDUxODAwZTViMTBjZWExZGQxNjhmMjZkMTAyN2E2ZDllYjM3NGI3ZmFjMz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPXRleHQlMkZjc3YiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2Nzc0OTQ3MzJ9fX1dfQ__&Signature=EzQB9f7xPckvqfFB6LzcyR-wzTnQCqtPDdWtQUzZ3QJ-gY-IHG5mxQITJgMr1nVTbJZrPmGAaDngMcPFUfSQa8RmCqYH~dZl-UGE8CO4neKNUT1DvA2WEvLDS4WaAJ3SN-9rX0uFb03~c1QS78cIgIRboYvf6ugKiJz86Bd7Vs~tcp201JFR0A6jIMseqApOnkb9d8dHMP3Ny~F6gO3Qf2QpEWM-QsDIyw2Kz2QV55nq8TsDpRYZCZo50~WwD~73Hej0PoDhEA1K37d19pa0CQhkaN-gjCrbT9xLabbvhJWa~ZkWcMdD0teCgjYqv1wKyvFXDAxukxLGEc7OBXVbYw__&Key-Pair-Id=KVTP0A1DKRTAX](https://cdn-lfs.huggingface.co/repos/e2/89/e28905247d6f48bb4edad5baf9b1bb4158e897a13fdf18bf3b8ee89ff8387ab8/46eca7431a7b6bad344bf451800e5b10cea1dd168f26d1027a6d9eb374b7fac3?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27dev.csv%3B+filename%3D%22dev.csv%22%3B&response-content-type=text/csv&Expires=1677494732&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2UyLzg5L2UyODkwNTI0N2Q2ZjQ4YmI0ZWRhZDViYWY5YjFiYjQxNThlODk3YTEzZmRmMThiZjNiOGVlODlmZjgzODdhYjgvNDZlY2E3NDMxYTdiNmJhZDM0NGJmNDUxODAwZTViMTBjZWExZGQxNjhmMjZkMTAyN2E2ZDllYjM3NGI3ZmFjMz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPXRleHQlMkZjc3YiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2Nzc0OTQ3MzJ9fX1dfQ__&Signature=EzQB9f7xPckvqfFB6LzcyR-wzTnQCqtPDdWtQUzZ3QJ-gY-IHG5mxQITJgMr1nVTbJZrPmGAaDngMcPFUfSQa8RmCqYH~dZl-UGE8CO4neKNUT1DvA2WEvLDS4WaAJ3SN-9rX0uFb03~c1QS78cIgIRboYvf6ugKiJz86Bd7Vs~tcp201JFR0A6jIMseqApOnkb9d8dHMP3Ny~F6gO3Qf2QpEWM-QsDIyw2Kz2QV55nq8TsDpRYZCZo50~WwD~73Hej0PoDhEA1K37d19pa0CQhkaN-gjCrbT9xLabbvhJWa~ZkWcMdD0teCgjYqv1wKyvFXDAxukxLGEc7OBXVbYw__&Key-Pair-Id=KVTP0A1DKRTAX)')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[5], line 4\r\n 1 from datasets import load_dataset\r\n 3 spigi = load_dataset(\"kensho/spgispeech\", \"dev\", split=\"validation\", streaming=True)\r\n----> 4 sample = next(iter(spigi))\r\n\r\nFile ~/datasets/src/datasets/iterable_dataset.py:937, in IterableDataset.__iter__(self)\r\n 934 yield from self._iter_pytorch(ex_iterable)\r\n 935 return\r\n--> 937 for key, example in ex_iterable:\r\n 938 if self.features:\r\n 939 # `IterableDataset` automatically fills missing columns with None.\r\n 940 # This is done with `_apply_feature_types_on_example`.\r\n 941 yield _apply_feature_types_on_example(\r\n 942 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 943 )\r\n\r\nFile ~/datasets/src/datasets/iterable_dataset.py:113, in ExamplesIterable.__iter__(self)\r\n 112 def __iter__(self):\r\n--> 113 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/kensho--spgispeech/5fbf75dd9ef795a9b5a673457d2cbaf0b8fa0de8fb62acbd1da338d83a41e2f0/spgispeech.py:186, in Spgispeech._generate_examples(self, local_extracted_archive_paths, archives, meta_path)\r\n 183 dict_keys = [\"wav_filename\", \"wav_filesize\", \"transcript\"]\r\n 185 logging.info(\"Reading metadata...\")\r\n--> 186 with open(meta_path, encoding=\"utf-8\") as f:\r\n 187 csvreader = csv.DictReader(f, delimiter=\"|\")\r\n 188 metadata = {x[\"wav_filename\"]: dict((k, x[k]) for k in dict_keys) for x in csvreader}\r\n\r\nFile ~/datasets/src/datasets/streaming.py:70, in extend_module_for_streaming.<locals>.wrap_auth.<locals>.wrapper(*args, **kwargs)\r\n 68 @wraps(function)\r\n 69 def wrapper(*args, **kwargs):\r\n---> 70 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile ~/datasets/src/datasets/download/streaming_download_manager.py:495, in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 493 kwargs = {**kwargs, **new_kwargs}\r\n 494 try:\r\n--> 495 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n 496 except ValueError as e:\r\n 497 if str(e) == \"Cannot seek streaming HTTP file\":\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/core.py:135, in OpenFile.open(self)\r\n 128 def open(self):\r\n 129 \"\"\"Materialise this as a real open file without context\r\n 130 \r\n 131 The OpenFile object should be explicitly closed to avoid enclosed file\r\n 132 instances persisting. You must, therefore, keep a reference to the OpenFile\r\n 133 during the life of the file-like it generates.\r\n 134 \"\"\"\r\n--> 135 return self.__enter__()\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/spec.py:1106, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1104 else:\r\n 1105 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1106 f = self._open(\r\n 1107 path,\r\n 1108 mode=mode,\r\n 1109 block_size=block_size,\r\n 1110 autocommit=ac,\r\n 1111 cache_options=cache_options,\r\n 1112 **kwargs,\r\n 1113 )\r\n 1114 if compression is not None:\r\n 1115 from fsspec.compression import compr\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:346, in HTTPFileSystem._open(self, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 344 kw[\"asynchronous\"] = self.asynchronous\r\n 345 kw.update(kwargs)\r\n--> 346 size = size or self.info(path, **kwargs)[\"size\"]\r\n 347 session = sync(self.loop, self.set_session)\r\n 348 if block_size and size:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:113, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 110 @functools.wraps(func)\r\n 111 def wrapper(*args, **kwargs):\r\n 112 self = obj or args[0]\r\n--> 113 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:98, in sync(loop, func, timeout, *args, **kwargs)\r\n 96 raise FSTimeoutError from return_result\r\n 97 elif isinstance(return_result, BaseException):\r\n---> 98 raise return_result\r\n 99 else:\r\n 100 return return_result\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:53, in _runner(event, coro, result, timeout)\r\n 51 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 52 try:\r\n---> 53 result[0] = await coro\r\n 54 except Exception as ex:\r\n 55 result[0] = ex\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:420, in HTTPFileSystem._info(self, url, **kwargs)\r\n 417 except Exception as exc:\r\n 418 if policy == \"get\":\r\n 419 # If get failed, then raise a FileNotFoundError\r\n--> 420 raise FileNotFoundError(url) from exc\r\n 421 logger.debug(str(exc))\r\n 423 return {\"name\": url, \"size\": None, **info, \"type\": \"file\"}\r\n\r\nFileNotFoundError: https://huggingface.co/datasets/kensho/spgispeech/resolve/main/data/meta/dev.csv\r\n```\r\n</details>",
"Hi ! We're investigating this issue, sorry for the inconvenience",
"This has been resolved ! Thanks for reporting",
"Wow, thanks for the very quick fix!",
"This problem now appears again, this time with an underlying HTTP 502 status code:\r\n\r\n```\r\naiohttp.client_exceptions.ClientResponseError: 502, message='Bad Gateway', url=URL('https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-validation.00002-of-00008.json.gz')\r\n```",
"Re-executing a minute later, the underlying cause is an HTTP 403 status code, as reported yesterday:\r\n\r\n```\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/4bf6b248b0f910dcde2cdf2118d6369d8208c8f9515ec29ab73e531f380b18e2?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-validation.00002-of-00008.json.gz%3B+filename%3D%22c4-validation.00002-of-00008.json.gz%22%3B&response-content-type=application/gzip&Expires=1677571273&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvNGJmNmIyNDhiMGY5MTBkY2RlMmNkZjIxMThkNjM2OWQ4MjA4YzhmOTUxNWVjMjlhYjczZTUzMWYzODBiMThlMj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzU3MTI3M319fV19&Signature=WW42NOKkLuX~xVB1QfbkqzdvGo2AOXpgbF3PjTXy6iKd~ffilr1N9ScPXfvTXqy5yvdhJg1G0xJy1zYtUjGAL8GEx3Av-0vIhpWMGYTM8XKEU5gYA9qt30oVtNph6TkTYSABrsYTaj-hzQL9WCgyapmjvG69ETMh4wj44r2rcbk4T3j0l6l4u76Gh~lyRSll3aK4qycdUwcyL7FECDu~0W1mJIJwKkCrWHhSpHJSshb-0ElwG71pq4eyQ5g2uxHdK6JbRF7loxUpRQQJ1vlk0EHXdw0wTMaQ9tqHy6xcrQd8Ep0Yvx3tUD8MR0vWOcbQKnL6LwPQByc8tkChlpjnig__&Key-Pair-Id=KVTP0A1DKRTAX')\r\n```",
"I'm facing the same problem. Interestingly using `wget` I can download the file. ",
"It's been resolved again ;)",
"> It's been resolved again ;)\r\n\r\nI'm experiencing the same issue when trying to load this dataset, `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/realnewslike/c4-train.00000-of-00512.json.gz`",
"Experiencing the same issues as above : `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.`\r\n\r\nHave made sure to login as well, issue persists.",
"> Experiencing the same issues as above : `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz If the repo is private or gated, make sure to log in with `huggingface-cli login`.`\r\n> \r\n> Have made sure to login as well, issue persists.\r\n\r\nI meet the same issue",
"I meet the same issue"
] | 2023-02-24T07:57:32
| 2023-12-18T07:32:32
| 2023-02-27T04:03:38
|
NONE
| null | null | null | null |
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", streaming=True)
next(iter(dataset))
```
causes a
```
FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz
```
I can download this file manually though e.g. by entering this URL in a browser.
There is an underlying HTTP 403 status code:
```
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/8ef8d75b0e045dec4aa5123a671b4564466b0707086a7ed1ba8721626dfffbc9?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-train.00000-of-01024.json.gz%3B+filename%3D%22c4-train.00000-of-01024.json.gz%22%3B&response-content-type=application/gzip&Expires=1677483770&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvOGVmOGQ3NWIwZTA0NWRlYzRhYTUxMjNhNjcxYjQ1NjQ0NjZiMDcwNzA4NmE3ZWQxYmE4NzIxNjI2ZGZmZmJjOT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzQ4Mzc3MH19fV19&Signature=yjL3UeY72cf2xpnvPvD68eAYOEe2qtaUJV55sB-jnPskBJEMwpMJcBZvg2~GqXZdM3O-GWV-Z3CI~d4u5VCb4YZ-HlmOjr3VBYkvox2EKiXnBIhjMecf2UVUPtxhTa9kBVlWjqu4qKzB9gKXZF2Cwpp5ctLzapEaT2nnqF84RAL-rsqMA3I~M8vWWfivQsbBK63hMfgZqqKMgdWM0iKMaItveDl0ufQ29azMFmsR7qd8V7sU2Z-F1fAeohS8HpN9OOnClW34yi~YJ2AbgZJJBXA~qsylfVA0Qp7Q~yX~q4P8JF1vmJ2BjkiSbGrj3bAXOGugpOVU5msI52DT88yMdA__&Key-Pair-Id=KVTP0A1DKRTAX')
```
### Expected behavior
This should retrieve the first example from the C4 validation set. This worked a few days ago but stopped working now.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/202907?v=4",
"events_url": "https://api.github.com/users/krasserm/events{/privacy}",
"followers_url": "https://api.github.com/users/krasserm/followers",
"following_url": "https://api.github.com/users/krasserm/following{/other_user}",
"gists_url": "https://api.github.com/users/krasserm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/krasserm",
"id": 202907,
"login": "krasserm",
"node_id": "MDQ6VXNlcjIwMjkwNw==",
"organizations_url": "https://api.github.com/users/krasserm/orgs",
"received_events_url": "https://api.github.com/users/krasserm/received_events",
"repos_url": "https://api.github.com/users/krasserm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/krasserm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krasserm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/krasserm",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5574/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5574/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 20:06:06
|
https://api.github.com/repos/huggingface/datasets/issues/5572
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5572/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5572/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5572/events
|
https://github.com/huggingface/datasets/issues/5572
| 1,597,257,624
|
I_kwDODunzps5fNDeY
| 5,572
|
Datasets 2.10.0 does not reuse the dataset cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4",
"events_url": "https://api.github.com/users/lsb/events{/privacy}",
"followers_url": "https://api.github.com/users/lsb/followers",
"following_url": "https://api.github.com/users/lsb/following{/other_user}",
"gists_url": "https://api.github.com/users/lsb/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lsb",
"id": 45281,
"login": "lsb",
"node_id": "MDQ6VXNlcjQ1Mjgx",
"organizations_url": "https://api.github.com/users/lsb/orgs",
"received_events_url": "https://api.github.com/users/lsb/received_events",
"repos_url": "https://api.github.com/users/lsb/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsb/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lsb",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2023-02-23T17:28:11
| 2023-02-23T18:03:55
| 2023-02-23T18:03:55
|
NONE
| null | null | null | null |
### Describe the bug
download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist.
Specifically, upon losing an internet connection trying to load a dataset for a second time in ten seconds, a connection error results, showing a breakpoint of:
```
File ~/jupyterlab/.direnv/python-3.9.6/lib/python3.9/site-packages/datasets/load.py:1174, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1165 except Exception as e: # noqa: catch any exception of hf_hub and consider that the dataset doesn't exist
1166 if isinstance(
1167 e,
1168 (
(...)
1172 ),
1173 ):
-> 1174 raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})")
1175 elif "404" in str(e):
1176 msg = f"Dataset '{path}' doesn't exist on the Hub"
ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError)
```
This has been around since at least v2.0.
### Steps to reproduce the bug
```
from datasets import load_dataset
import numpy as np
tenk = load_dataset("lsb/tenk") # ten thousand integers
print(np.average(tenk['train']['a'])) # prints 4999.5
### now disconnect your internet
tenk_too = load_dataset("lsb/tenk", download_mode="reuse_dataset_if_exists")
# Raises ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError)
```
### Expected behavior
I expected that I would be able to reuse the dataset I just downloaded.
### Environment info
- `datasets` version: 2.10.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4",
"events_url": "https://api.github.com/users/lsb/events{/privacy}",
"followers_url": "https://api.github.com/users/lsb/followers",
"following_url": "https://api.github.com/users/lsb/following{/other_user}",
"gists_url": "https://api.github.com/users/lsb/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lsb",
"id": 45281,
"login": "lsb",
"node_id": "MDQ6VXNlcjQ1Mjgx",
"organizations_url": "https://api.github.com/users/lsb/orgs",
"received_events_url": "https://api.github.com/users/lsb/received_events",
"repos_url": "https://api.github.com/users/lsb/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsb/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lsb",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5572/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5572/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:35:44
|
https://api.github.com/repos/huggingface/datasets/issues/5571
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5571/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5571/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5571/events
|
https://github.com/huggingface/datasets/issues/5571
| 1,597,198,953
|
I_kwDODunzps5fM1Jp
| 5,571
|
load_dataset fails for JSON in windows
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11876897?v=4",
"events_url": "https://api.github.com/users/abinashsahu/events{/privacy}",
"followers_url": "https://api.github.com/users/abinashsahu/followers",
"following_url": "https://api.github.com/users/abinashsahu/following{/other_user}",
"gists_url": "https://api.github.com/users/abinashsahu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abinashsahu",
"id": 11876897,
"login": "abinashsahu",
"node_id": "MDQ6VXNlcjExODc2ODk3",
"organizations_url": "https://api.github.com/users/abinashsahu/orgs",
"received_events_url": "https://api.github.com/users/abinashsahu/received_events",
"repos_url": "https://api.github.com/users/abinashsahu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abinashsahu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abinashsahu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abinashsahu",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! \r\n\r\nYou need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:\r\n```python\r\n ds = load_dataset(\"json\", data_files=args.input_json)\r\n```\r\n\r\n",
"Thanks it worked!"
] | 2023-02-23T16:50:11
| 2023-02-24T13:21:47
| 2023-02-24T13:21:47
|
NONE
| null | null | null | null |
### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is different than the location of the JSON.
4. When I read using load_dataset("json",args.input_json), it throws and error from builder.py.
raise InvalidConfigName(
f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. "
f"They could create issues when creating a directory for this config on Windows filesystem."
6. When I bring the data to the current directory, it works fine.
### Steps to reproduce the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is different than the location of the JSON.
4. When I read using load_dataset("json",args.input_json), it throws and error from builder.py.
raise InvalidConfigName(
f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. "
f"They could create issues when creating a directory for this config on Windows filesystem."
6. When I bring the data to the current directory, it works fine.
### Expected behavior
Should be able to read from a path different than current directory in Windows machine.
### Environment info
datasets version: 2.3.1
python version: 3.8
Windows OS
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5571/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5571/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 20:31:36
|
https://api.github.com/repos/huggingface/datasets/issues/5570
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5570/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5570/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5570/events
|
https://github.com/huggingface/datasets/issues/5570
| 1,597,190,926
|
I_kwDODunzps5fMzMO
| 5,570
|
load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38630200?v=4",
"events_url": "https://api.github.com/users/buoi/events{/privacy}",
"followers_url": "https://api.github.com/users/buoi/followers",
"following_url": "https://api.github.com/users/buoi/following{/other_user}",
"gists_url": "https://api.github.com/users/buoi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/buoi",
"id": 38630200,
"login": "buoi",
"node_id": "MDQ6VXNlcjM4NjMwMjAw",
"organizations_url": "https://api.github.com/users/buoi/orgs",
"received_events_url": "https://api.github.com/users/buoi/received_events",
"repos_url": "https://api.github.com/users/buoi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/buoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buoi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/buoi",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it?",
"The error is now more informative:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the same directory. Couldn't find 'imagenet-1k' on the Hugging Face Hub either: FileNotFoundError: Dataset 'imagenet-1k' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\n"
] | 2023-02-23T16:44:32
| 2023-07-24T15:18:50
| 2023-07-24T15:18:50
|
NONE
| null | null | null | null |
### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once accepting.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet = load_dataset("imagenet-1k", split="train", streaming=True)
FileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the same directory. Couldn't find 'imagenet-1k' on the Hugging Face Hub either: FileNotFoundError: Dataset 'imagenet-1k' doesn't exist on the Hub
```
tested on a colab notebook.
### Expected behavior
I would expect a specific error indicating that I have to login then accept the dataset licence.
I find this bug very relevant as this code is on a guide on the [Huggingface documentation for Datasets](https://huggingface.co/docs/datasets/about_mapstyle_vs_iterable)
### Environment info
google colab cpu-only instance
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5570/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5570/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 150 days, 22:34:18
|
https://api.github.com/repos/huggingface/datasets/issues/5568
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5568/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5568/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5568/events
|
https://github.com/huggingface/datasets/issues/5568
| 1,596,900,532
|
I_kwDODunzps5fLsS0
| 5,568
|
dataset.to_iterable_dataset() loses useful info like dataset features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/bruno-hays/events{/privacy}",
"followers_url": "https://api.github.com/users/bruno-hays/followers",
"following_url": "https://api.github.com/users/bruno-hays/following{/other_user}",
"gists_url": "https://api.github.com/users/bruno-hays/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bruno-hays",
"id": 48770768,
"login": "bruno-hays",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/bruno-hays/orgs",
"received_events_url": "https://api.github.com/users/bruno-hays/received_events",
"repos_url": "https://api.github.com/users/bruno-hays/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bruno-hays/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bruno-hays/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bruno-hays",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/bruno-hays/events{/privacy}",
"followers_url": "https://api.github.com/users/bruno-hays/followers",
"following_url": "https://api.github.com/users/bruno-hays/following{/other_user}",
"gists_url": "https://api.github.com/users/bruno-hays/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bruno-hays",
"id": 48770768,
"login": "bruno-hays",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/bruno-hays/orgs",
"received_events_url": "https://api.github.com/users/bruno-hays/received_events",
"repos_url": "https://api.github.com/users/bruno-hays/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bruno-hays/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bruno-hays/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bruno-hays",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/bruno-hays/events{/privacy}",
"followers_url": "https://api.github.com/users/bruno-hays/followers",
"following_url": "https://api.github.com/users/bruno-hays/following{/other_user}",
"gists_url": "https://api.github.com/users/bruno-hays/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bruno-hays",
"id": 48770768,
"login": "bruno-hays",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/bruno-hays/orgs",
"received_events_url": "https://api.github.com/users/bruno-hays/received_events",
"repos_url": "https://api.github.com/users/bruno-hays/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bruno-hays/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bruno-hays/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bruno-hays",
"user_view_type": "public"
}
] |
[
"Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.\r\n\r\nSetting this as a good first issue if someone would like to contribute, otherwise we can take care of it :)",
"#self-assign",
"seems like the feature parameter is missing from `return IterableDataset.from_generator(Dataset._iter_shards, gen_kwargs={\"shards\": shards})` hence it defaults to None."
] | 2023-02-23T13:45:33
| 2023-02-24T13:22:36
| 2023-02-24T13:22:36
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleave iterable datasets, cast columns etc.
### Steps to reproduce the bug
```python
dataset = load_dataset("lhoestq/demo1")["train"]
print(dataset.features)
# {'id': Value(dtype='string', id=None), 'package_name': Value(dtype='string', id=None), 'review': Value(dtype='string', id=None), 'date': Value(dtype='string', id=None), 'star': Value(dtype='int64', id=None), 'version_id': Value(dtype='int64', id=None)}
dataset = dataset.to_iterable_dataset()
print(dataset.features)
# None
```
### Expected behavior
Keep the relevant information
### Environment info
datasets==2.10.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5568/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5568/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 23:37:03
|
https://api.github.com/repos/huggingface/datasets/issues/5566
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5566/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5566/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5566/events
|
https://github.com/huggingface/datasets/issues/5566
| 1,595,916,674
|
I_kwDODunzps5fH8GC
| 5,566
|
Directly reading parquet files in a s3 bucket from the load_dataset method
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"Hi ! I think is in the scope of this other issue: to https://github.com/huggingface/datasets/issues/5281 "
] | 2023-02-22T22:13:40
| 2023-02-23T11:03:29
| null |
NONE
| null | null | null | null |
### Feature request
Right now, we have to read the get the parquet file to the local storage. So having ability to read given the bucket directly address would be benificial
### Motivation
In a production set up, this feature can help us a lot. So we do not need move training datafiles in between storage.
### Your contribution
I am willing to help if there's anyway.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5566/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5566/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5555
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5555/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5555/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5555/events
|
https://github.com/huggingface/datasets/issues/5555
| 1,592,469,938
|
I_kwDODunzps5e6ymy
| 5,555
|
`.shuffle` throwing error `ValueError: Protocol not known: parent`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10768588?v=4",
"events_url": "https://api.github.com/users/prabhakar267/events{/privacy}",
"followers_url": "https://api.github.com/users/prabhakar267/followers",
"following_url": "https://api.github.com/users/prabhakar267/following{/other_user}",
"gists_url": "https://api.github.com/users/prabhakar267/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prabhakar267",
"id": 10768588,
"login": "prabhakar267",
"node_id": "MDQ6VXNlcjEwNzY4NTg4",
"organizations_url": "https://api.github.com/users/prabhakar267/orgs",
"received_events_url": "https://api.github.com/users/prabhakar267/received_events",
"repos_url": "https://api.github.com/users/prabhakar267/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prabhakar267/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabhakar267/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prabhakar267",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi ! The indices mapping is written in the same cachedirectory as your dataset.\r\n\r\nCan you run this to show your current cache directory ?\r\n```python\r\nprint(train_dataset.cache_files)\r\n```",
"```\r\n[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]\r\n```\r\n\r\nThese are the actual paths where `.hf` files are stored. ",
"I'm not aware of any `.hf` file ? What are you referring to ?\r\n\r\nAlso the error says \"Protocol unknown: parent\". Is there a chance you may have ended up with a path that contains this string `parent://` ?",
"I figured out why the issue was occuring but don't know the long-term fix.\r\nThe dataset I was trying to shuffle was loaded from a saved file which had `::` delimiter in filename. When I try with the exact same file without `::` in filename, it works as expected.\r\nQuick fix is to not use colons in filename. But if this is expected behaviour, this should be clearly stated in the documentation.\r\nThanks for help @lhoestq "
] | 2023-02-20T21:33:45
| 2023-02-27T09:23:34
| null |
NONE
| null | null | null | null |
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3616, in Dataset.shuffle(self, seed, generator, keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint)
3610 return self._new_dataset_with_indices(
3611 fingerprint=new_fingerprint, indices_cache_file_name=indices_cache_file_name
3612 )
3614 permutation = generator.permutation(len(self))
-> 3616 return self.select(
3617 indices=permutation,
3618 keep_in_memory=keep_in_memory,
3619 indices_cache_file_name=indices_cache_file_name if not keep_in_memory else None,
3620 writer_batch_size=writer_batch_size,
3621 new_fingerprint=new_fingerprint,
3622 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3266, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3263 return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
3265 # If not contiguous, we need to create a new indices mapping
-> 3266 return self._select_with_indices_mapping(
3267 indices,
3268 keep_in_memory=keep_in_memory,
3269 indices_cache_file_name=indices_cache_file_name,
3270 writer_batch_size=writer_batch_size,
3271 new_fingerprint=new_fingerprint,
3272 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3389, in Dataset._select_with_indices_mapping(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3387 logger.info(f"Caching indices mapping at {indices_cache_file_name}")
3388 tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False)
-> 3389 writer = ArrowWriter(
3390 path=tmp_file.name, writer_batch_size=writer_batch_size, fingerprint=new_fingerprint, unit="indices"
3391 )
3393 indices = indices if isinstance(indices, list) else list(indices)
3395 size = len(self)
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_writer.py:315, in ArrowWriter.__init__(self, schema, features, path, stream, fingerprint, writer_batch_size, hash_salt, check_duplicates, disable_nullable, update_features, with_metadata, unit, embed_local_files, storage_options)
312 self._disable_nullable = disable_nullable
314 if stream is None:
--> 315 fs_token_paths = fsspec.get_fs_token_paths(path, storage_options=storage_options)
316 self._fs: fsspec.AbstractFileSystem = fs_token_paths[0]
317 self._path = (
318 fs_token_paths[2][0]
319 if not is_remote_filesystem(self._fs)
320 else self._fs.unstrip_protocol(fs_token_paths[2][0])
321 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:593, in get_fs_token_paths(urlpath, mode, num, name_function, storage_options, protocol, expand)
591 else:
592 urlpath = stringify_path(urlpath)
--> 593 chain = _un_chain(urlpath, storage_options or {})
594 if len(chain) > 1:
595 inkwargs = {}
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:330, in _un_chain(path, kwargs)
328 for bit in reversed(bits):
329 protocol = split_protocol(bit)[0] or "file"
--> 330 cls = get_filesystem_class(protocol)
331 extra_kwargs = cls._get_kwargs_from_urls(bit)
332 kws = kwargs.get(protocol, {})
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/registry.py:240, in get_filesystem_class(protocol)
238 if protocol not in registry:
239 if protocol not in known_implementations:
--> 240 raise ValueError("Protocol not known: %s" % protocol)
241 bit = known_implementations[protocol]
242 try:
ValueError: Protocol not known: parent
```
This is what the `train_dataset` object looks like
```
Dataset({
features: ['label', 'input_ids', 'attention_mask'],
num_rows: 364166
})
```
### Steps to reproduce the bug
The `train_dataset` obj is created by concatenating two datasets
And then shuffle is called, but it throws the mentioned error.
### Expected behavior
Should shuffle the dataset properly.
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.13
- PyArrow version: 10.0.0
- Pandas version: 1.4.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5555/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5555/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5548
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5548/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5548/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5548/events
|
https://github.com/huggingface/datasets/issues/5548
| 1,590,835,479
|
I_kwDODunzps5e0jkX
| 5,548
|
Apply flake8-comprehensions to codebase
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4",
"events_url": "https://api.github.com/users/Skylion007/events{/privacy}",
"followers_url": "https://api.github.com/users/Skylion007/followers",
"following_url": "https://api.github.com/users/Skylion007/following{/other_user}",
"gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Skylion007",
"id": 2053727,
"login": "Skylion007",
"node_id": "MDQ6VXNlcjIwNTM3Mjc=",
"organizations_url": "https://api.github.com/users/Skylion007/orgs",
"received_events_url": "https://api.github.com/users/Skylion007/received_events",
"repos_url": "https://api.github.com/users/Skylion007/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Skylion007",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[] | 2023-02-19T20:05:38
| 2023-02-23T13:59:41
| 2023-02-23T13:59:41
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
Apply ruff flake8 comprehension checks to codebase.
### Motivation
This should strictly improve the performance / readability of the codebase by removing unnecessary iteration, function calls, etc. This should generate better Python bytecode which should strictly improve performance.
I already applied this fixes to PyTorch and Sympy with little issue and have opened PRs to diffusers and transformers todo this as well.
### Your contribution
Making a PR.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5548/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5548/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 17:54:03
|
https://api.github.com/repos/huggingface/datasets/issues/5546
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5546/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5546/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5546/events
|
https://github.com/huggingface/datasets/issues/5546
| 1,590,346,349
|
I_kwDODunzps5eysJt
| 5,546
|
Downloaded datasets do not cache at $HF_HOME
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4",
"events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}",
"followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers",
"following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}",
"gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ErfanMoosaviMonazzah",
"id": 79091831,
"login": "ErfanMoosaviMonazzah",
"node_id": "MDQ6VXNlcjc5MDkxODMx",
"organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs",
"received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events",
"repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ErfanMoosaviMonazzah",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Can you make sure you set `HF_HOME` before importing `datasets` ?\r\n\r\nThen you can print\r\n```python\r\nprint(datasets.config.HF_CACHE_HOME)\r\nprint(datasets.config.HF_DATASETS_CACHE)\r\n```"
] | 2023-02-18T13:30:35
| 2023-07-24T14:22:43
| 2023-07-24T14:22:43
|
NONE
| null | null | null | null |
### Describe the bug
In the huggingface course (https://huggingface.co/course/chapter3/2?fw=pt) it said that if we set HF_HOME, downloaded datasets would be cached at specified address but it does not. downloaded models from checkpoint names are downloaded and cached at HF_HOME but this is not the case for datasets, they are still cached at ~/.cache/huggingface/datasets.
### Steps to reproduce the bug
Run the following code
```
from datasets import load_dataset
raw_datasets = load_dataset("glue", "mrpc")
raw_datasets
```
it downloads and store dataset at ~/.cache/huggingface/datasets
### Expected behavior
to cache dataset at HF_HOME.
### Environment info
python 3.10.6
Kubuntu 22.04
HF_HOME located on a separate partition
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5546/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5546/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 156 days, 0:52:08
|
https://api.github.com/repos/huggingface/datasets/issues/5543
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5543/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5543/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5543/events
|
https://github.com/huggingface/datasets/issues/5543
| 1,588,951,379
|
I_kwDODunzps5etXlT
| 5,543
|
the pile datasets url seems to change back
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wjfwzzc",
"id": 5126316,
"login": "wjfwzzc",
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wjfwzzc",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thanks for reporting, @wjfwzzc.\r\n\r\nI am transferring this issue to the corresponding dataset on the Hub: https://huggingface.co/datasets/bookcorpusopen/discussions/1",
"Thank you. All fixes are done:\r\n- [x] https://huggingface.co/datasets/bookcorpusopen/discussions/2\r\n- [x] https://huggingface.co/datasets/the_pile/discussions/1\r\n- [x] https://huggingface.co/datasets/the_pile_books3/discussions/1\r\n- [x] https://huggingface.co/datasets/the_pile_openwebtext2/discussions/2\r\n- [x] https://huggingface.co/datasets/the_pile_stack_exchange/discussions/2"
] | 2023-02-17T08:40:11
| 2023-02-21T06:37:00
| 2023-02-20T08:41:33
|
NONE
| null | null | null | null |
### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
ConnectionError: Couldn't reach https://mystic.the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz (ProxyError(MaxRetryError("HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_pr
eliminary_components/books1.tar.gz (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Gateway Timeout')))")))
```
### Expected behavior
Downloading as normal.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 6.0.1
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5543/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5543/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 0:01:22
|
https://api.github.com/repos/huggingface/datasets/issues/5541
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5541/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5541/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5541/events
|
https://github.com/huggingface/datasets/issues/5541
| 1,588,633,555
|
I_kwDODunzps5esJ_T
| 5,541
|
Flattening indices in selected datasets is extremely inefficient
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"events_url": "https://api.github.com/users/marioga/events{/privacy}",
"followers_url": "https://api.github.com/users/marioga/followers",
"following_url": "https://api.github.com/users/marioga/following{/other_user}",
"gists_url": "https://api.github.com/users/marioga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marioga",
"id": 6591505,
"login": "marioga",
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"organizations_url": "https://api.github.com/users/marioga/orgs",
"received_events_url": "https://api.github.com/users/marioga/received_events",
"repos_url": "https://api.github.com/users/marioga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marioga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marioga",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:\r\n```\r\nNum chunks for original ds: 1\r\nOriginal ds save/load\r\nsave_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s\r\nload_from_disk -- RAM memory used: 42.796875 MB -- Total time: 0.014899 s\r\nNum chunks for original ds after reloading: 5000\r\n\r\nNum chunks for selected ds: 1\r\nflatten_indices -- RAM memory used: 42.546875 MB -- Total time: 23.735089 s\r\nNum chunks for selected ds after flattening: 5000\r\n\r\nSelected ds save/load\r\nsave_to_disk -- RAM memory used: 0.0 MB -- Total time: 0.287112 s\r\nload_from_disk -- RAM memory used: 38.84375 MB -- Total time: 0.014772 s\r\nNum chunks for selected ds after reloading: 5000\r\n```",
"Wouahouh super cool @marioga thanks a lot!",
"We just released `datasets==2.10.0` with this big improvement, thanks again @marioga "
] | 2023-02-17T01:52:24
| 2023-02-22T13:15:20
| 2023-02-17T11:12:33
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. This is extremely inefficient and slows down the operations on the flat dataset, e.g., saving/loading the dataset to disk becomes really slow.
Perhaps more importantly, loading the dataset back from disk basically loads the whole table into RAM, as it cannot take advantage of memory mapping.
### Steps to reproduce the bug
The following script reproduces the issue:
```python
import gc
import os
import psutil
import tempfile
import time
from datasets import Dataset
DATASET_SIZE = 5000000
def profile(func):
def wrapper(*args, **kwargs):
mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
start = time.time()
# Run function here
out = func(*args, **kwargs)
end = time.time()
mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
print(f"{func.__name__} -- RAM memory used: {mem_after - mem_before} MB -- Total time: {end - start:.6f} s")
return out
return wrapper
def main():
ds = Dataset.from_list([{'col': i} for i in range(DATASET_SIZE)])
print(f"Num chunks for original ds: {ds.data['col'].num_chunks}")
with tempfile.TemporaryDirectory() as tmpdir:
path1 = os.path.join(tmpdir, 'ds1')
print("Original ds save/load")
profile(ds.save_to_disk)(path1)
ds_loaded = profile(Dataset.load_from_disk)(path1)
print(f"Num chunks for original ds after reloading: {ds_loaded.data['col'].num_chunks}")
print("")
ds_select = ds.select(reversed(range(len(ds))))
print(f"Num chunks for selected ds: {ds_select.data['col'].num_chunks}")
del ds
del ds_loaded
gc.collect()
# This would happen anyway when we call save_to_disk
ds_select = profile(ds_select.flatten_indices)()
print(f"Num chunks for selected ds after flattening: {ds_select.data['col'].num_chunks}")
print("")
path2 = os.path.join(tmpdir, 'ds2')
print("Selected ds save/load")
profile(ds_select.save_to_disk)(path2)
del ds_select
gc.collect()
ds_select_loaded = profile(Dataset.load_from_disk)(path2)
print(f"Num chunks for selected ds after reloading: {ds_select_loaded.data['col'].num_chunks}")
if __name__ == '__main__':
main()
```
Sample result:
```
Num chunks for original ds: 1
Original ds save/load
save_to_disk -- RAM memory used: 0.515625 MB -- Total time: 0.253888 s
load_from_disk -- RAM memory used: 42.765625 MB -- Total time: 0.015176 s
Num chunks for original ds after reloading: 5000
Num chunks for selected ds: 1
flatten_indices -- RAM memory used: 4852.609375 MB -- Total time: 46.116774 s
Num chunks for selected ds after flattening: 5000000
Selected ds save/load
save_to_disk -- RAM memory used: 1326.65625 MB -- Total time: 42.309825 s
load_from_disk -- RAM memory used: 2085.953125 MB -- Total time: 11.659137 s
Num chunks for selected ds after reloading: 5000000
```
### Expected behavior
Saving/loading the dataset should be much faster and consume almost no extra memory thanks to pyarrow memory mapping.
### Environment info
- `datasets` version: 2.9.1.dev0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5541/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5541/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 9:20:09
|
https://api.github.com/repos/huggingface/datasets/issues/5539
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5539/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5539/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5539/events
|
https://github.com/huggingface/datasets/issues/5539
| 1,587,970,083
|
I_kwDODunzps5epoAj
| 5,539
|
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41912135?v=4",
"events_url": "https://api.github.com/users/aalbersk/events{/privacy}",
"followers_url": "https://api.github.com/users/aalbersk/followers",
"following_url": "https://api.github.com/users/aalbersk/following{/other_user}",
"gists_url": "https://api.github.com/users/aalbersk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aalbersk",
"id": 41912135,
"login": "aalbersk",
"node_id": "MDQ6VXNlcjQxOTEyMTM1",
"organizations_url": "https://api.github.com/users/aalbersk/orgs",
"received_events_url": "https://api.github.com/users/aalbersk/received_events",
"repos_url": "https://api.github.com/users/aalbersk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aalbersk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aalbersk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aalbersk",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nimport torch\r\n\r\ndataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='train')\r\ndef t(batch):\r\n return {\"test\": torch.tensor([1] * len(batch[next(iter(batch))]))}\r\n \r\ndataset.set_transform(t)\r\nd_0 = dataset[0]\r\n```\r\n\r\nStill, the formatter's error message should mention that a dict of **sequences** is expected as the returned value (not just a dict) to make debugging easier.",
"I can take this",
"Fixed in #5553 ",
"> Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> import torch\r\n> \r\n> dataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='train')\r\n> def t(batch):\r\n> return {\"test\": torch.tensor([1] * len(batch[next(iter(batch))]))}\r\n> \r\n> dataset.set_transform(t)\r\n> d_0 = dataset[0]\r\n> ```\r\n> \r\n> Still, the formatter's error message should mention that a dict of **sequences** is expected as the returned value (not just a dict) to make debugging easier.\r\n\r\nok, will change it according to suggestion. Thanks for the reply!"
] | 2023-02-16T16:08:51
| 2023-02-22T10:30:30
| 2023-02-21T13:03:57
|
NONE
| null | null | null | null |
### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in _unnest
return {key: array[0] for key, array in py_dict.items()}
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in <dictcomp>
return {key: array[0] for key, array in py_dict.items()}
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
```
### Steps to reproduce the bug
Load whichever dataset and add transform method to add 0-dim tensor. Or create/find a dataset containing 0-dim tensor. E.g.
```python
from datasets import load_dataset
import torch
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train')
def t(batch):
return {"test": torch.tensor(1)}
dataset.set_transform(t)
d_0 = dataset[0]
```
### Expected behavior
Extractor will correctly get a row from the dataset, even if it contains 0-dim tensor.
### Environment info
`datasets==2.8.0`, but it looks like it is also applicable to main branch version (as of 16th February)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5539/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5539/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4 days, 20:55:06
|
https://api.github.com/repos/huggingface/datasets/issues/5538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5538/events
|
https://github.com/huggingface/datasets/issues/5538
| 1,587,732,596
|
I_kwDODunzps5eouB0
| 5,538
|
load_dataset in seaborn is not working for me. getting this error.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/125575109?v=4",
"events_url": "https://api.github.com/users/reemaranibarik/events{/privacy}",
"followers_url": "https://api.github.com/users/reemaranibarik/followers",
"following_url": "https://api.github.com/users/reemaranibarik/following{/other_user}",
"gists_url": "https://api.github.com/users/reemaranibarik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/reemaranibarik",
"id": 125575109,
"login": "reemaranibarik",
"node_id": "U_kgDOB3wfxQ",
"organizations_url": "https://api.github.com/users/reemaranibarik/orgs",
"received_events_url": "https://api.github.com/users/reemaranibarik/received_events",
"repos_url": "https://api.github.com/users/reemaranibarik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/reemaranibarik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reemaranibarik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/reemaranibarik",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! `seaborn`'s `load_dataset` pulls datasets from [here](https://github.com/mwaskom/seaborn-data) and not from our Hub, so this issue is not related to our library in any way and should be reported in their repo instead."
] | 2023-02-16T14:01:58
| 2023-02-16T14:44:36
| 2023-02-16T14:44:36
|
NONE
| null | null | null | null |
TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selector, req.data, headers,
1347 encode_chunked=req.has_header('Transfer-encoding'))
~\anaconda3\lib\http\client.py in request(self, method, url, body, headers, encode_chunked)
1278 """Send a complete request to the server."""
-> 1279 self._send_request(method, url, body, headers, encode_chunked)
1280
~\anaconda3\lib\http\client.py in _send_request(self, method, url, body, headers, encode_chunked)
1324 body = _encode(body, 'body')
-> 1325 self.endheaders(body, encode_chunked=encode_chunked)
1326
~\anaconda3\lib\http\client.py in endheaders(self, message_body, encode_chunked)
1273 raise CannotSendHeader()
-> 1274 self._send_output(message_body, encode_chunked=encode_chunked)
1275
~\anaconda3\lib\http\client.py in _send_output(self, message_body, encode_chunked)
1033 del self._buffer[:]
-> 1034 self.send(msg)
1035
~\anaconda3\lib\http\client.py in send(self, data)
973 if self.auto_open:
--> 974 self.connect()
975 else:
~\anaconda3\lib\http\client.py in connect(self)
1440
-> 1441 super().connect()
1442
~\anaconda3\lib\http\client.py in connect(self)
944 """Connect to the host and port specified in __init__."""
--> 945 self.sock = self._create_connection(
946 (self.host,self.port), self.timeout, self.source_address)
~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address)
843 try:
--> 844 raise err
845 finally:
~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address)
831 sock.bind(source_address)
--> 832 sock.connect(sa)
833 # Break explicitly a reference cycle
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
URLError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_12220/2927704185.py in <module>
1 import seaborn as sn
----> 2 iris = sn.load_dataset('iris')
~\anaconda3\lib\site-packages\seaborn\utils.py in load_dataset(name, cache, data_home, **kws)
594 if name not in get_dataset_names():
595 raise ValueError(f"'{name}' is not one of the example datasets.")
--> 596 urlretrieve(url, cache_path)
597 full_path = cache_path
598 else:
~\anaconda3\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data)
237 url_type, path = _splittype(url)
238
--> 239 with contextlib.closing(urlopen(url, data)) as fp:
240 headers = fp.info()
241
~\anaconda3\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
212 else:
213 opener = _opener
--> 214 return opener.open(url, data, timeout)
215
216 def install_opener(opener):
~\anaconda3\lib\urllib\request.py in open(self, fullurl, data, timeout)
515
516 sys.audit('urllib.Request', req.full_url, req.data, req.headers, req.get_method())
--> 517 response = self._open(req, data)
518
519 # post-process response
~\anaconda3\lib\urllib\request.py in _open(self, req, data)
532
533 protocol = req.type
--> 534 result = self._call_chain(self.handle_open, protocol, protocol +
535 '_open', req)
536 if result:
~\anaconda3\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args)
492 for handler in handlers:
493 func = getattr(handler, meth_name)
--> 494 result = func(*args)
495 if result is not None:
496 return result
~\anaconda3\lib\urllib\request.py in https_open(self, req)
1387
1388 def https_open(self, req):
-> 1389 return self.do_open(http.client.HTTPSConnection, req,
1390 context=self._context, check_hostname=self._check_hostname)
1391
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1347 encode_chunked=req.has_header('Transfer-encoding'))
1348 except OSError as err: # timeout error
-> 1349 raise URLError(err)
1350 r = h.getresponse()
1351 except:
URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5538/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5538/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:42:38
|
https://api.github.com/repos/huggingface/datasets/issues/5537
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5537/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5537/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5537/events
|
https://github.com/huggingface/datasets/issues/5537
| 1,587,567,464
|
I_kwDODunzps5eoFto
| 5,537
|
Increase speed of data files resolution
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4",
"events_url": "https://api.github.com/users/semajyllek/events{/privacy}",
"followers_url": "https://api.github.com/users/semajyllek/followers",
"following_url": "https://api.github.com/users/semajyllek/following{/other_user}",
"gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/semajyllek",
"id": 35013374,
"login": "semajyllek",
"node_id": "MDQ6VXNlcjM1MDEzMzc0",
"organizations_url": "https://api.github.com/users/semajyllek/orgs",
"received_events_url": "https://api.github.com/users/semajyllek/received_events",
"repos_url": "https://api.github.com/users/semajyllek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/semajyllek",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4",
"events_url": "https://api.github.com/users/semajyllek/events{/privacy}",
"followers_url": "https://api.github.com/users/semajyllek/followers",
"following_url": "https://api.github.com/users/semajyllek/following{/other_user}",
"gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/semajyllek",
"id": 35013374,
"login": "semajyllek",
"node_id": "MDQ6VXNlcjM1MDEzMzc0",
"organizations_url": "https://api.github.com/users/semajyllek/orgs",
"received_events_url": "https://api.github.com/users/semajyllek/received_events",
"repos_url": "https://api.github.com/users/semajyllek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/semajyllek",
"user_view_type": "public"
}
] |
[
"#self-assign",
"You were right, if `self.dir_cache` is not None in glob, it is exactly the same as what is returned by find, at least for all the tests we have, and some extended evaluation I did across a random sample of about 1000 datasets. \r\n\r\nThanks for the nice hints, and let me know if this is not exactly what we want here!\r\n\r\nsee PR: https://github.com/huggingface/datasets/pull/5704\r\n\r\n",
"I think we can make the data files resolution (significantly) faster in 2 steps:\r\n\r\n1. `glob` calls `find` (which in turn calls `ls`), so we need `find` to be fast, and this can be achieved by fetching all the entries in a single API call and avoiding calls to `ls`. Implementing this for `HfFileSystem.find` (the one in `huggingface_hub`) is on my TO-DO list.\r\n2. caching the repeated `find` calls in `_get_data_files_patterns` when the `data_files` patterns are not provided in `load_dataset`. To address this, we can introduce a `_resolve_single_pattern` function that would accept a filesystem object and a list of regex patterns to resolve. Then we can wrap this filesystem object in `_get_data_files_patterns` with an object that would cache the find calls before resolving the patterns with `_resolve_single_pattern`. (Feel free to suggest a cleaner implementation)\r\n\r\nWDYT?",
"Good idea :) \r\n\r\nFor 2:\r\n\r\nThat would work ! It's also possible to have a FileSystem with a cache on `.find` and use it inside the resolver passed to `_get_data_files_patterns`. Right now they're pretty simple:\r\n\r\n```python\r\n# for remote repositories\r\nresolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info, base_path=base_path)\r\n# for local\r\nresolver = partial(_resolve_single_pattern_locally, base_path)\r\n```",
"something like this maybe (with Quentin's reimplementation of `HfFilesystem.find`)?\r\n\r\n ```\r\n @lru_cache(max_size=None)\r\n def _find(self, path, maxdepth=None, withdirs=False, detail=False, **kwargs):\r\n```\r\n\r\nIn any case please let me know if I can help in any way!"
] | 2023-02-16T12:11:45
| 2023-12-15T13:12:31
| 2023-12-15T13:12:31
|
MEMBER
| null | null | null | null |
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files.
This comes from `resolve_patterns_in_dataset_repository` which calls `_resolve_single_pattern_in_dataset_repository`, which iterates on all the files at
```python
glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]
```
but calling `glob` on such a dataset is too expensive. Indeed it calls `ls()` in `hffilesystem.py` too many times.
Maybe `glob` can be more optimized in `hffilesystem.py`, or the data files resolution can directly be implemented in the filesystem by checking its `dir_cache` ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5537/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5537/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 302 days, 1:00:46
|
https://api.github.com/repos/huggingface/datasets/issues/5536
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5536/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5536/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5536/events
|
https://github.com/huggingface/datasets/issues/5536
| 1,586,930,643
|
I_kwDODunzps5elqPT
| 5,536
|
Failure to hash function when using .map()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6916056?v=4",
"events_url": "https://api.github.com/users/venzen/events{/privacy}",
"followers_url": "https://api.github.com/users/venzen/followers",
"following_url": "https://api.github.com/users/venzen/following{/other_user}",
"gists_url": "https://api.github.com/users/venzen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/venzen",
"id": 6916056,
"login": "venzen",
"node_id": "MDQ6VXNlcjY5MTYwNTY=",
"organizations_url": "https://api.github.com/users/venzen/orgs",
"received_events_url": "https://api.github.com/users/venzen/received_events",
"repos_url": "https://api.github.com/users/venzen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/venzen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venzen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/venzen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! `enc` is not hashable:\r\n```python\r\nimport tiktoken\r\nfrom datasets.fingerprint import Hasher\r\n\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\nHasher.hash(enc)\r\n# raises TypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\nIt happens because it's not picklable, and because of that it's not possible to cache the result of `map`, hence the warning message.\r\n\r\nYou can find more details about caching here: https://huggingface.co/docs/datasets/about_cache\r\n\r\nYou can also provide your own unique hash in `map` if you want, with the `new_fingerprint` argument.\r\nOr disable caching using\r\n```python\r\nimport datasets\r\ndatasets.disable_caching()\r\n```",
"@lhoestq Thank you for the explanation and advice. Will relay all of this to the repo where this (non)issue arose. \r\n\r\nGreat job with huggingface! ",
"We made tiktoken tokenizers hashable in #5552, which is included in today's release `datasets==2.10.0`",
"Just a heads up that when I'm trying to use TikToken along with the a given Dataset `.map()` method, I am still met with the following error :\r\n\r\n```\r\n File \"/opt/conda/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save\r\n StockPickler.save(self, obj, save_persistent_id)\r\n File \"/opt/conda/lib/python3.8/pickle.py\", line 578, in save\r\n rv = reduce(self.proto)\r\nTypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\n\r\nMy current environment is running datasets v2.10.0.",
"cc @mariosasko ",
"@lhoestq @edhenry I am also seeing this, do you have any suggested solution?",
"With which `datasets` version ? Can you try to udpate ?",
"@lhoestq @edhenry I am on datasets version `'2.12.0'. I see the same `TypeError: cannot pickle 'builtins.CoreBPE' object` that others are seeing.",
"I am able to reproduce this on datasets 2.14.2. The `datasets.disable_caching()` doesn't work around it.\r\n\r\n@lhoestq - you might want to reopen this issue. Because of this issue folks won't be able run Karpathy's NanoGPT :(.",
"update: temporarily solved the problem by setting\r\n```\r\n--preprocess_num_workers 1\r\n```\r\n\r\n-------------\r\nI have met the same problem, here is my env:\r\n```\r\ndatasets 2.14.4\r\ntransformers 4.31.0\r\ntiktoken 0.4.0\r\ntorch 1.13.1\r\n```",
"@mengban I cannot reproduce the issue even with these versions installed. It would help if you could provide info about your system and the `pip list` output.",
"@mariosasko Please take a look at this\r\n```python\r\nfrom typing import Any\r\nfrom datasets import Dataset\r\nimport tiktoken\r\n\r\ndataset = Dataset.from_list([{\"n\": str(i)} for i in range(20)])\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\n\r\n\r\nclass A:\r\n tokenizer = enc #tiktoken.get_encoding(\"gpt2\")\r\n\r\n def __call__(self, example) -> Any:\r\n ids = self.tokenizer.encode(example[\"n\"])\r\n example[\"len\"] = len(ids)\r\n return example\r\n\r\na = A()\r\n\r\ndef process(example):\r\n ids = a.tokenizer.encode(example[\"n\"])\r\n example[\"len\"] = len(ids)\r\n return example\r\n\r\n# success\r\ntokenized = dataset.map(process, desc=\"tiktoken\", num_proc=2)\r\n\r\n# raise TypeError: cannot pickle 'builtins.CoreBPE' object\r\ntokenized = dataset.map(a, desc=\"tiktoken\", num_proc=2)\r\n```\r\n\r\npip list\r\n```\r\ndatasets 2.14.4\r\ntiktoken 0.4.0\r\n```",
"Thanks @maxwellzh! Our `Hasher` works with this snippet, but the problem is running multiprocessing with a non-serializable `tiktoken.Encoding` object.\r\n\r\nInserting the following code before the `map` should fix this:\r\n```python\r\nimport copyreg\r\n\r\ndef pickle_Encoding(enc):\r\n return (functools.partial(tiktoken.core.Encoding, enc.name, pat_str=enc._pat_str, mergeable_ranks=enc._mergeable_ranks, special_tokens=enc._special_tokens), ())\r\n\r\ncopyreg.pickle(tiktoken.core.Encoding, pickle_Encoding)\r\n```\r\n\r\nBut the best fix would be implementing `__reduce__` for `tiktoken.Encoding` or `tiktoken.CoreBPE`. If I find time, I'll try to fix this in the `tiktoken` repo.",
"I think the right way to fix this would be to have new tokenizer instance for each process. This applies to many other tokenizers that don't support multi-process or have bugs. To do this, first define tokenizer factory class like this:\r\n\r\n```\r\n class TikTokenFactory:\r\n def __init__(self):\r\n self._enc = None\r\n self.eot_token = None\r\n\r\n def encode_ordinary(self, text):\r\n if self._enc is None:\r\n self._enc = tiktoken.get_encoding(\"gpt2\")\r\n self.eot_token = self._enc.eot_token\r\n return self._enc.encode_ordinary(text)\r\n```\r\n\r\nNow use this in `.map()` like this:\r\n\r\n```\r\n # tokenize the dataset\r\n tokenized = dataset.map(\r\n partial(process, TikTokenFactory()),\r\n remove_columns=['text'],\r\n desc=\"tokenizing the splits\",\r\n num_proc=max(1, cpu_count()//2),\r\n )\r\n```\r\n\r\nA full working example is here: https://github.com/sytelus/nanoGPT/blob/refactor/nanogpt_common/hf_data_prepare.py"
] | 2023-02-16T03:12:07
| 2023-09-08T21:06:01
| 2023-02-16T14:56:41
|
NONE
| null | null | null | null |
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed._
This issue with `.map()` happens for me consistently, as also described in closed issue #4506
Dataset indices can be individually serialized using dill and pickle without any errors. I'm using tiktoken to encode in the function passed to map(). Similarly, indices can be individually encoded without error.
### Steps to reproduce the bug
```py
from datasets import load_dataset
import tiktoken
dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")
tokenized = dataset.map(
process,
remove_columns=['text'],
desc="tokenizing the OWT splits",
)
def process(example):
ids = enc.encode(example['text'])
ids.append(enc.eot_token)
out = {'ids': ids, 'len': len(ids)}
return out
```
### Expected behavior
Should encode simple text objects.
### Environment info
Python versions tried: both 3.8 and 3.10.10
`PYTHONUTF8=1` as env variable
Datasets tried:
- stas/openwebtext-10k
- rotten_tomatoes
- local text file
OS: Ubuntu Linux 20.04
Package versions:
- torch 1.13.1
- dill 0.3.4 (if using 0.3.6 - same issue)
- datasets 2.9.0
- tiktoken 0.2.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6916056?v=4",
"events_url": "https://api.github.com/users/venzen/events{/privacy}",
"followers_url": "https://api.github.com/users/venzen/followers",
"following_url": "https://api.github.com/users/venzen/following{/other_user}",
"gists_url": "https://api.github.com/users/venzen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/venzen",
"id": 6916056,
"login": "venzen",
"node_id": "MDQ6VXNlcjY5MTYwNTY=",
"organizations_url": "https://api.github.com/users/venzen/orgs",
"received_events_url": "https://api.github.com/users/venzen/received_events",
"repos_url": "https://api.github.com/users/venzen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/venzen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venzen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/venzen",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5536/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5536/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 11:44:34
|
https://api.github.com/repos/huggingface/datasets/issues/5534
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5534/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5534/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5534/events
|
https://github.com/huggingface/datasets/issues/5534
| 1,586,177,862
|
I_kwDODunzps5eiydG
| 5,534
|
map() breaks at certain dataset size when using Array3D
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArneBinder",
"id": 3375489,
"login": "ArneBinder",
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArneBinder",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi! This code works for me locally or in Colab. What's the output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` when you run it inside your environment?",
"Thanks for looking into this!\r\nThe output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` is:\r\n```\r\n11.0.0\r\n```\r\n\r\nI did the following to setup the environment:\r\n```\r\nconda create -n datasets_debug python=3.9\r\nconda activate datasets_debug\r\npip install datasets==2.9.0\r\n```\r\n\r\nI just tested this on another machine (Ubuntu 18.04.6 LTS) with the same result as mentioned in the issue description.\r\n"
] | 2023-02-15T16:34:25
| 2023-03-03T16:31:33
| null |
NONE
| null | null | null | null |
### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception:
```
Traceback (most recent call last):
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3255, in _map_single
writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize
self.write_examples_on_file()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file
batch_examples[col] = array_concat(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat
return _concat_arrays(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays
return array_type.wrap_array(_concat_arrays([array.storage for array in arrays]))
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays
return pa.ListArray.from_arrays(
File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays
File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Negative offsets in list array
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2815, in map
return self._map_single(
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 546, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 513, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3259, in _map_single
writer.finalize()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize
self.write_examples_on_file()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file
batch_examples[col] = array_concat(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat
return _concat_arrays(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays
return array_type.wrap_array(_concat_arrays([array.storage for array in arrays]))
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays
return pa.ListArray.from_arrays(
File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays
File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Negative offsets in list array
```
### Steps to reproduce the bug
1. put following dataset loading script into: debug/debug.py
```python
import datasets
import numpy as np
class DEBUG(datasets.GeneratorBasedBuilder):
"""DEBUG dataset."""
def _info(self):
return datasets.DatasetInfo(
features=datasets.Features(
{
"id": datasets.Value("uint8"),
"img_data": datasets.Array3D(shape=(3, 224, 224), dtype="uint8"),
},
),
supervised_keys=None,
)
def _split_generators(self, dl_manager):
return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]
def _generate_examples(self):
for i in range(149):
image_np = np.zeros(shape=(3, 224, 224), dtype=np.int8).tolist()
yield f"id_{i}", {"id": i, "img_data": image_np}
```
2. try the following code:
```python
import datasets
def add_dummy_col(ex):
ex["dummy"] = "test"
return ex
ds = datasets.load_dataset(path="debug", split="train")
# works
ds_filtered_works = ds.filter(lambda example: example["id"] < 95)
print(f"filtered result size: {len(ds_filtered_works)}")
# output:
# filtered result size: 95
ds_mapped_works = ds_filtered_works.map(add_dummy_col)
# fails
ds_filtered_error = ds.filter(lambda example: example["id"] < 96)
print(f"filtered result size: {len(ds_filtered_error)}")
# output:
# filtered result size: 96
ds_mapped_error = ds_filtered_error.map(add_dummy_col)
```
### Expected behavior
The example code does not fail.
### Environment info
Python 3.9.16 (main, Jan 11 2023, 16:05:54); [GCC 11.2.0] :: Anaconda, Inc. on linux
datasets 2.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5534/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5534/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5532
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5532/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5532/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5532/events
|
https://github.com/huggingface/datasets/issues/5532
| 1,584,505,128
|
I_kwDODunzps5ecaEo
| 5,532
|
train_test_split in arrow_dataset does not ensure to keep single classes in test set
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37191008?v=4",
"events_url": "https://api.github.com/users/Ulipenitz/events{/privacy}",
"followers_url": "https://api.github.com/users/Ulipenitz/followers",
"following_url": "https://api.github.com/users/Ulipenitz/following{/other_user}",
"gists_url": "https://api.github.com/users/Ulipenitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ulipenitz",
"id": 37191008,
"login": "Ulipenitz",
"node_id": "MDQ6VXNlcjM3MTkxMDA4",
"organizations_url": "https://api.github.com/users/Ulipenitz/orgs",
"received_events_url": "https://api.github.com/users/Ulipenitz/received_events",
"repos_url": "https://api.github.com/users/Ulipenitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ulipenitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ulipenitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ulipenitz",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! You can get this behavior by specifying `stratify_by_column=\"label\"` in `train_test_split`.\r\n\r\nThis is the full example:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, ClassLabel\r\n\r\ndata = [\r\n {'label': 0, 'text': \"example1\"},\r\n {'label': 1, 'text': \"example2\"},\r\n {'label': 1, 'text': \"example3\"},\r\n {'label': 1, 'text': \"example4\"},\r\n {'label': 0, 'text': \"example5\"},\r\n {'label': 1, 'text': \"example6\"},\r\n {'label': 2, 'text': \"example7\"},\r\n {'label': 2, 'text': \"example8\"}\r\n]\r\n\r\nfor _ in range(10):\r\n data_set = Dataset.from_list(data)\r\n data_set = data_set.cast_column(\"label\", ClassLabel(num_classes=3))\r\n data_set = data_set.train_test_split(test_size=0.5, stratify_by_column=\"label\")\r\n unique_labels_train = np.unique(data_set[\"train\"][:][\"label\"])\r\n unique_labels_test = np.unique(data_set[\"test\"][:][\"label\"])\r\n assert len(unique_labels_train) >= len(unique_labels_test) \r\n```\r\n"
] | 2023-02-14T16:52:29
| 2023-02-15T16:09:19
| 2023-02-15T16:09:19
|
NONE
| null | null | null | null |
### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will be in the test set. thus will never be considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
data = [
{'label': 0, 'text': "example1"},
{'label': 1, 'text': "example2"},
{'label': 1, 'text': "example3"},
{'label': 1, 'text': "example4"},
{'label': 0, 'text': "example5"},
{'label': 1, 'text': "example6"},
{'label': 2, 'text': "example7"},
{'label': 2, 'text': "example8"}
]
for _ in range(10):
data_set = Dataset.from_list(data)
data_set = data_set.train_test_split(test_size=0.5)
data_set["train"]
unique_labels_train = np.unique(data_set["train"][:]["label"])
unique_labels_test = np.unique(data_set["test"][:]["label"])
assert len(unique_labels_train) >= len(unique_labels_test)
```
### Expected behavior
I expect to have every available class at least once in my training set.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5532/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5532/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 23:16:50
|
https://api.github.com/repos/huggingface/datasets/issues/5531
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5531/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5531/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5531/events
|
https://github.com/huggingface/datasets/issues/5531
| 1,584,387,276
|
I_kwDODunzps5eb9TM
| 5,531
|
Invalid Arrow data from JSONL
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] |
[] | 2023-02-14T15:39:49
| 2023-02-14T15:46:09
| null |
MEMBER
| null | null | null | null |
This code fails:
```python
from datasets import Dataset
ds = Dataset.from_json(path_to_file)
ds.data.validate()
```
raises
```python
ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063)
```
This causes many issues for @TevenLeScao:
- `map` fails because it fails to rewrite invalid arrow arrays
```python
~/Desktop/hf/datasets/src/datasets/arrow_writer.py in write_examples_on_file(self)
438 if all(isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) for row in self.current_examples):
439 arrays = [row[0][col] for row in self.current_examples]
--> 440 batch_examples[col] = array_concat(arrays)
441 else:
442 batch_examples[col] = [
~/Desktop/hf/datasets/src/datasets/table.py in array_concat(arrays)
1885
1886 if not _is_extension_type(array_type):
-> 1887 return pa.concat_arrays(arrays)
1888
1889 def _offsets_concat(offsets):
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.concat_arrays()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: array slice would exceed array length
```
- `to_dict()` **segfaults** ⚠️
```python
/Users/runner/work/crossbow/crossbow/arrow/cpp/src/arrow/array/data.cc:99: Check failed: (off) <= (length) Slice offset greater
than array length
```
To reproduce: unzip the archive and run the above code using `sanity_oscar_en.jsonl`
[sanity_oscar_en.jsonl.zip](https://github.com/huggingface/datasets/files/10734124/sanity_oscar_en.jsonl.zip)
PS: reading using pandas and converting to Arrow works though (note that the dataset lives in RAM in this case):
```python
ds = Dataset.from_pandas(pd.read_json(path_to_file, lines=True))
ds.data.validate()
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5531/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5531/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5525
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5525/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5525/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5525/events
|
https://github.com/huggingface/datasets/issues/5525
| 1,580,342,729
|
I_kwDODunzps5eMh3J
| 5,525
|
TypeError: Couldn't cast array of type string to null
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74564958?v=4",
"events_url": "https://api.github.com/users/TJ-Solergibert/events{/privacy}",
"followers_url": "https://api.github.com/users/TJ-Solergibert/followers",
"following_url": "https://api.github.com/users/TJ-Solergibert/following{/other_user}",
"gists_url": "https://api.github.com/users/TJ-Solergibert/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TJ-Solergibert",
"id": 74564958,
"login": "TJ-Solergibert",
"node_id": "MDQ6VXNlcjc0NTY0OTU4",
"organizations_url": "https://api.github.com/users/TJ-Solergibert/orgs",
"received_events_url": "https://api.github.com/users/TJ-Solergibert/received_events",
"repos_url": "https://api.github.com/users/TJ-Solergibert/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TJ-Solergibert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TJ-Solergibert/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TJ-Solergibert",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting, @TJ-Solergibert.\r\n\r\nWe cannot access your Colab notebook: `There was an error loading this notebook. Ensure that the file is accessible and try again.`\r\nCould you please make it publicly accessible?\r\n",
"I swear it's public, I've checked the settings and I've been able to open it in incognito mode.\r\n\r\nNotebook: https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?usp=sharing\r\n\r\nAnyway, this is the code to reproduce the error:\r\n\r\n```python3\r\nfrom datasets import ClassLabel\r\nfrom datasets import load_dataset\r\n\r\neuroparl_ds = load_dataset(\"tj-solergibert/Europarl-ST\")\r\n\r\nsource_lang = \"nl\"\r\nlanguages = list(europarl_ds[\"train\"][0][\"transcriptions\"].keys())\r\nClassLabels = ClassLabel(num_classes = len(languages), names = languages)\r\n\r\ndef map_label2id(example):\r\n example['dest_lang'] = ClassLabels.str2int(example['dest_lang'])\r\n return example\r\n\r\ndef unfold_transcriptions(example):\r\n for lang in languages:\r\n example[lang] = example[\"transcriptions\"][lang]\r\n return example\r\n\r\ndef unroll(batch, src_lang, dest_langs):\r\n source_t, dest_t, dest_l = [], [], []\r\n for lang in dest_langs: \r\n source_t += batch[src_lang]\r\n dest_t += batch[lang]\r\n dest_l += [lang]\r\n return_dict = {\"source_text\": source_t, \"dest_text\": dest_t, \"dest_lang\": dest_l}\r\n return return_dict\r\n\r\ndef preprocess_split(ds_split, src_lang):\r\n dest_langs = [x for x in languages if x != src_lang]\r\n\r\n ds_split = ds_split.map(unroll, fn_kwargs= {\"src_lang\": src_lang, \"dest_langs\": dest_langs}, batched = True, batch_size = 1, remove_columns= list(languages))\r\n ds_split = ds_split.filter(lambda x: x[\"source_text\"] != None and x[\"dest_text\"] != None) # Remove incomplete translations\r\n ds_split = ds_split.filter(lambda x: x[\"source_text\"] != \"None\" and x[\"dest_text\"] != \"None\")\r\n ds_split = ds_split.map(map_label2id) \r\n ds_split = ds_split.cast_column(\"dest_lang\", ClassLabels)\r\n return ds_split\r\n\r\ndef reset_cortas(example):\r\n for lang in languages:\r\n if isinstance(example[lang], str):\r\n if example[lang].isnumeric () or len(example[lang]) <= 5:\r\n example[lang] = \"None\"\r\n return example\r\n\r\ndef clean_dataset(dataset):\r\n # Remove columns\r\n dataset = dataset.remove_columns([\"original_speech\", \"original_language\", \"audio_path\", \"segment_start\", \"segment_end\"])\r\n # Unfold\r\n dataset = dataset.map(unfold_transcriptions, remove_columns = [\"transcriptions\"])\r\n dataset = dataset.map(reset_cortas)\r\n return dataset\r\n\r\nprocessed_europarl = clean_dataset(europarl_ds[\"test\"])\r\nnew_train_ds = preprocess_split(processed_europarl, 'nl')\r\n```",
"Thanks, @TJ-Solergibert. I can access your notebook now. Maybe it was just a temporary issue.\r\n\r\nAt first sight, it seems something related to your data: maybe some of the examples do not have all the transcriptions for all the languages. Then, some of them are null when unrolled. And when trying to concatenate with the other rows containing strings, the cast issue is raised (the arrays to be concatenated have different types).\r\n\r\nDo you think this could be the case?",
"See, in this example, \"nl\" and \"ro\" transcripts are null:\r\n```python\r\n>>> europarl_ds[\"test\"][:1]\r\n{'original_speech': ['− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta'],\r\n 'original_language': ['es'],\r\n 'audio_path': ['es/audios/en.20081008.24.3-238.m4a'],\r\n 'segment_start': [0.6200000047683716],\r\n 'segment_end': [11.319999694824219],\r\n 'transcriptions': [{'de': '− Herr Präsident! Zunächst möchte ich Richard Seeber zu der von ihm geleisteten Arbeit gratulieren, denn sein Bericht greift viele der in diesem Haus zum Ausdruck gebrachten Anliegen',\r\n 'en': '− Mr President, firstly I would like to congratulate Mr Seeber on the work he has done, because his report picks up many of the concerns expressed in this',\r\n 'es': '− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta',\r\n 'fr': '− Monsieur le Président, je voudrais tout d ’ abord féliciter M. Seeber pour le travail qu ’ il a effectué, parce que son rapport reprend beaucoup des inquiétudes exprimées au sein de cette',\r\n 'it': \"− Signor Presidente, mi congratulo innanzi tutto con l'onorevole Seeber per il lavoro svolto, perché la sua relazione accoglie molti dei timori espressi da quest'Aula\",\r\n 'nl': None,\r\n 'pl': '− Panie przewodniczący! Po pierwsze chciałabym pogratulować panu posłowi Seeberowi wykonanej pracy, ponieważ jego sprawozdanie podejmuje szereg podnoszonych w tej Izbie',\r\n 'pt': '− Senhor Presidente, começo por felicitar o senhor deputado Seeber pelo trabalho que desenvolveu em torno deste relatório, que retoma muitas das preocupações expressas nesta',\r\n 'ro': None}]}\r\n```\r\n```python\r\n>>> processed_europarl[0]\r\n{'de': '− Herr Präsident! Zunächst möchte ich Richard Seeber zu der von ihm geleisteten Arbeit gratulieren, denn sein Bericht greift viele der in diesem Haus zum Ausdruck gebrachten Anliegen',\r\n 'en': '− Mr President, firstly I would like to congratulate Mr Seeber on the work he has done, because his report picks up many of the concerns expressed in this',\r\n 'es': '− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta',\r\n 'fr': '− Monsieur le Président, je voudrais tout d ’ abord féliciter M. Seeber pour le travail qu ’ il a effectué, parce que son rapport reprend beaucoup des inquiétudes exprimées au sein de cette',\r\n 'it': \"− Signor Presidente, mi congratulo innanzi tutto con l'onorevole Seeber per il lavoro svolto, perché la sua relazione accoglie molti dei timori espressi da quest'Aula\",\r\n 'nl': None,\r\n 'pl': '− Panie przewodniczący! Po pierwsze chciałabym pogratulować panu posłowi Seeberowi wykonanej pracy, ponieważ jego sprawozdanie podejmuje szereg podnoszonych w tej Izbie',\r\n 'pt': '− Senhor Presidente, começo por felicitar o senhor deputado Seeber pelo trabalho que desenvolveu em torno deste relatório, que retoma muitas das preocupações expressas nesta',\r\n 'ro': None}\r\n```",
"You can fix this issue by forcing the cast of None to str by hand:\r\n- If you replace this line:\r\n```python\r\nsource_t += batch[src_lang]\r\n```\r\n- With this line (because the batch size is 1):\r\n```python\r\nsource_t += [str(batch[src_lang][0])]\r\n```\r\n- Or with this line (if the batch size were larger than 1):\r\n```python\r\nsource_t += [str(text) for text in batch[src_lang]]\r\n```",
"Problem solved! Thanks @albertvillanova, now I have even increased the batch size and it's crazy fast :rocket: !"
] | 2023-02-10T21:12:36
| 2023-02-14T17:41:08
| 2023-02-14T09:35:49
|
NONE
| null | null | null | null |
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings (reset_cortas function). It only happends with NL, PL, RO and PT. It does not make sense since when processing the other languages I also use the corpus of those that fail and it does not cause any errors.
I suspect that the error may be in this direction:
We use cast_array_to_feature to support casting to custom types like Audio and Image # Also, when trying type "string", we don't want to convert integers or floats to "string". # We only do it if trying_type is False - since this is what the user asks for.
### Steps to reproduce the bug
Here I link a colab notebook to reproduce the error:
https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?authuser=1#scrollTo=FBAvlhMxIzpA
### Expected behavior
Data processing does not fail. A correct example can be seen here: https://huggingface.co/datasets/tj-solergibert/Europarl-ST-processed-mt-en
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5525/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5525/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 12:23:13
|
https://api.github.com/repos/huggingface/datasets/issues/5523
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5523/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5523/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5523/events
|
https://github.com/huggingface/datasets/issues/5523
| 1,580,193,015
|
I_kwDODunzps5eL9T3
| 5,523
|
Checking that split name is correct happens only after the data is downloaded
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
] |
[] | 2023-02-10T19:13:03
| 2023-02-10T19:14:50
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
Verification of split names (=indexing data by split) happens after downloading the data. So when the split name is incorrect, users learn about that only after the data is fully downloaded, for large datasets it might take a lot of time.
### Steps to reproduce the bug
Load any dataset with random split name, for example:
```python
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="blabla")
```
and the download will start smoothly, despite there is no split named "blabla".
### Expected behavior
Raise error when split name is incorrect.
### Environment info
`datasets==2.9.1.dev0`
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5523/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5523/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5520
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5520/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5520/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5520/events
|
https://github.com/huggingface/datasets/issues/5520
| 1,578,417,074
|
I_kwDODunzps5eFLuy
| 5,520
|
ClassLabel.cast_storage raises TypeError when called on an empty IntegerArray
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"events_url": "https://api.github.com/users/marioga/events{/privacy}",
"followers_url": "https://api.github.com/users/marioga/followers",
"following_url": "https://api.github.com/users/marioga/following{/other_user}",
"gists_url": "https://api.github.com/users/marioga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marioga",
"id": 6591505,
"login": "marioga",
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"organizations_url": "https://api.github.com/users/marioga/orgs",
"received_events_url": "https://api.github.com/users/marioga/received_events",
"repos_url": "https://api.github.com/users/marioga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marioga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marioga",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2023-02-09T18:46:52
| 2023-02-12T11:17:18
| 2023-02-12T11:17:18
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
`ClassLabel.cast_storage` raises `TypeError` when called on an empty `IntegerArray`.
### Steps to reproduce the bug
Minimal steps:
```python
import pyarrow as pa
from datasets import ClassLabel
ClassLabel(names=['foo', 'bar']).cast_storage(pa.array([], pa.int64()))
```
In practice, this bug arises in situations like the one below:
```python
from datasets import ClassLabel, Dataset, Features, Sequence
dataset = Dataset.from_dict({'labels': [[], []]}, features=Features({'labels': Sequence(ClassLabel(names=['foo', 'bar']))}))
# this raises TypeError
dataset.map(batched=True, batch_size=1)
```
### Expected behavior
`ClassLabel.cast_storage` should return an empty Int64Array.
### Environment info
- `datasets` version: 2.9.1.dev0
- Platform: Linux-4.15.0-1032-aws-x86_64-with-glibc2.27
- Python version: 3.10.6
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5520/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5520/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 16:30:26
|
https://api.github.com/repos/huggingface/datasets/issues/5517
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5517/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5517/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5517/events
|
https://github.com/huggingface/datasets/issues/5517
| 1,577,976,608
|
I_kwDODunzps5eDgMg
| 5,517
|
`with_format("numpy")` silently downcasts float64 to float32 features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4",
"events_url": "https://api.github.com/users/ernestum/events{/privacy}",
"followers_url": "https://api.github.com/users/ernestum/followers",
"following_url": "https://api.github.com/users/ernestum/following{/other_user}",
"gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ernestum",
"id": 1250234,
"login": "ernestum",
"node_id": "MDQ6VXNlcjEyNTAyMzQ=",
"organizations_url": "https://api.github.com/users/ernestum/orgs",
"received_events_url": "https://api.github.com/users/ernestum/received_events",
"repos_url": "https://api.github.com/users/ernestum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ernestum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ernestum",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi! This behavior stems from these lines:\r\n\r\nhttps://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L45-L46\r\n\r\nI agree we should preserve the original type whenever possible and downcast explicitly with a warning.\r\n\r\n@lhoestq Do you remember why we need this \"default dtype\" logic in our formatters?",
"I was also wondering why the default type logic is needed. Me just deleting it is probably too naive of a solution.",
"Hmm I think the idea was to end up with the usual default precision for deep learning models - no matter how the data was stored or where it comes from.\r\n\r\nFor example in NLP we store tokens using an optimized low precision to save disk space, but when we set the format to `torch` we actually need to get `int64`. Although the need for a default for integers also comes from numpy not returning the same integer precision depending on your machine. Finally I guess we added a default for floats as well for consistency.\r\n\r\nI'm a bit embarrassed by this though, as a user I'd have expected to get the same precision indeed as well and get a zero copy view.",
"Will you fix this or should I open a PR?",
"Unfortunately removing it for integers is a breaking change for most `transformers` + `datasets` users for NLP (which is a common case). Removing it for floats is a breaking change for `transformers` + `datasets` for ASR as well. And it also is a breaking change for the other users relying on this behavior.\r\n\r\nTherefore I think that the only short term solution is for the user to provide `dtype=` manually and document better this behavior. We could also extend `dtype` to accept a value that means \"return the same dtype as the underlying storage\" and make it easier to do zero copy.",
"@lhoestq It should be fine to remove this conversion in Datasets 3.0, no? For now, we can warn the user (with a log message) about the future change when the default type is changed.",
"Let's see with the transformers team if it sounds reasonable ? We'd have to fix multiple example scripts though.\r\n\r\nIf it's not ok we can also explore keeping this behavior only for tokens and audio data.",
"IMO being coupled with Transformers can lead to unexpected behavior when one tries to use our lib without pairing it with Transformers, so I think it's still important to \"fix\" this, even if it means we will need to update Transformers' example scripts afterward.\r\n",
"Ideally let's update the `transformers` example scripts before the change :P",
"For others that run into the same issue: A temporary workaround for me is this:\r\n```python\r\ndef numpy_transform(batch):\r\n return {key: np.asarray(val) for key, val in batch.items()}\r\n\r\ndataset = dataset.with_transform(numpy_transform)\r\n```",
"This behavior (silent upcast from `int32` to `int64`) is also unexpected for the user in https://discuss.huggingface.co/t/standard-getitem-returns-wrong-data-type-for-arrays/62470/2",
"Hi, I stumbled on a variation that upcasts uint8 to int64. I would expect the dtype to be the same as it was when I generated the dataset.\r\n\r\n```\r\nimport numpy as np\r\nimport datasets as ds\r\n\r\nfoo = np.random.randint(0, 256, size=(5, 10, 10), dtype=np.uint8)\r\n\r\nfeatures = ds.Features({\"foo\": ds.Array2D((10, 10), \"uint8\")})\r\ndataset = ds.Dataset.from_dict({\"foo\": foo}, features=features)\r\ndataset.set_format(\"torch\")\r\nprint(\"feature dtype:\", dataset.features[\"foo\"].dtype)\r\nprint(\"array dtype:\", dataset[\"foo\"].dtype)\r\n\r\n# feature dtype: uint8\r\n# array dtype: torch.int64\r\n```\r\n",
"workaround to remove torch upcasting\r\n\r\n```\r\nimport datasets as ds\r\nimport torch\r\n\r\nclass FixedTorchFormatter(ds.formatting.TorchFormatter):\r\n def _tensorize(self, value):\r\n return torch.from_numpy(value)\r\n\r\n\r\nds.formatting._register_formatter(FixedTorchFormatter, \"torch\")\r\n```"
] | 2023-02-09T14:18:00
| 2024-01-18T08:42:17
| null |
NONE
| null | null | null | null |
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print("feature dtype:", dataset.features['a'].dtype)
print("array dtype:", dataset['a'].dtype)
```
output:
```
feature dtype: float64
array dtype: float32
```
### Expected behavior
```
feature dtype: float64
array dtype: float64
```
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.4.4
### Suggested Fix
Changing [the `_tensorize` function of the numpy formatter](https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L32) to
```python
def _tensorize(self, value):
if isinstance(value, (str, bytes, type(None))):
return value
elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character):
return value
elif isinstance(value, np.number):
return value
return np.asarray(value, **self.np_array_kwargs)
```
fixes this particular issue for me. Not sure if this would break other tests. This should also avoid unnecessary copying of the array.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5517/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5517/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5514
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5514/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5514/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5514/events
|
https://github.com/huggingface/datasets/issues/5514
| 1,576,453,837
|
I_kwDODunzps5d9sbN
| 5,514
|
Improve inconsistency of `Dataset.map` interface for `load_from_cache_file`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4",
"events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}",
"followers_url": "https://api.github.com/users/HallerPatrick/followers",
"following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}",
"gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HallerPatrick",
"id": 22773355,
"login": "HallerPatrick",
"node_id": "MDQ6VXNlcjIyNzczMzU1",
"organizations_url": "https://api.github.com/users/HallerPatrick/orgs",
"received_events_url": "https://api.github.com/users/HallerPatrick/received_events",
"repos_url": "https://api.github.com/users/HallerPatrick/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HallerPatrick",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi, thanks for noticing this! We can't just remove the cache control as this allows us to control where the arrow files generated by the ops are written (cached on disk if enabled or a temporary directory if disabled). The right way to address this inconsistency would be by having `load_from_cache_file=None` by default everywhere.",
"Hi! Yes, this seems more plausible. I can implement that. One last thing is the type annotation `load_from_cache_file: bool = None`. Which I then would change to `load_from_cache_file: Optional[bool] = None`.",
"PR #5515 ",
"Yes, `Optional[bool]` is the correct type annotation and thanks for the PR."
] | 2023-02-08T16:40:44
| 2023-02-14T14:26:44
| 2023-02-14T14:26:44
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_from_cache_file (`bool`, defaults to `True` if caching is enabled):
If a cache file storing the current computation from `function`
can be identified, use it instead of recomputing.
```
1. `load_from_cache_file` default value is `None`, while being annotated as `bool`
2. It is inconsistent with other method signatures like `filter`, that have the default value `True`
3. The logic is inconsistent, as the `map` method checks if caching is enabled through `is_caching_enabled`. This logic is not used for other similar methods.
### Your contribution
I am not fully aware of the logic behind caching checks. If this is just a inconsistency that historically grew, I would suggest to remove the `is_caching_enabled` logic as the "default" logic. Maybe someone can give insights, if environment variables have a higher priority than local variables or vice versa.
If this is clarified, I could adjust the source according to the "Feature request" section of this issue.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5514/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5514/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 5 days, 21:46:00
|
https://api.github.com/repos/huggingface/datasets/issues/5513
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5513/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5513/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5513/events
|
https://github.com/huggingface/datasets/issues/5513
| 1,576,300,803
|
I_kwDODunzps5d9HED
| 5,513
|
Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! Let's not do this - renaming it would be a breaking change, and going through the deprecation cycle is only worth it if it improves user experience.",
"Hi @mariosasko, ok it makes sense. Anyway, don't you think it's worth it at some point to start a deprecation cycle e.g. `fs` in `load_from_disk`? It doesn't affect user experience but it's for sure a bad practice IMO, but's up to you 😄 Feel free to close this issue otherwise!",
"I don't think deprecating a param name in this particular instance is worth the hassle, so I'm closing the issue 🙂.",
"Sure, makes sense @mariosasko thanks!"
] | 2023-02-08T15:13:46
| 2023-07-24T16:02:18
| 2023-07-24T14:27:59
|
MEMBER
| null | null | null | null |
Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, shouldn't that be renamed to `format_type` before the 3.0.0 is released?
Just wanted to get your input, and if applicable, tackle this issue myself! Thanks 🤗
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5513/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5513/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 165 days, 23:14:13
|
https://api.github.com/repos/huggingface/datasets/issues/5511
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5511/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5511/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5511/events
|
https://github.com/huggingface/datasets/issues/5511
| 1,575,851,768
|
I_kwDODunzps5d7Zb4
| 5,511
|
Creating a dummy dataset from a bigger one
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Update `datasets` or downgrade `huggingface-hub` ;)\r\n\r\nThe `huggingface-hub` lib did a breaking change a few months ago, and you're using an old version of `datasets` that does't support it",
"Awesome thanks a lot! Everything works just fine with `datasets==2.9.0` :-) ",
"Getting same error with latest versions.\r\n\r\n\r\n```shell\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[99], line 1\r\n----> 1 dataset.push_to_hub(\"mirfan899/kids_phoneme_asr\")\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3538, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3493 def push_to_hub(\r\n 3494 self,\r\n 3495 repo_id: str,\r\n (...)\r\n 3501 embed_external_files: bool = True,\r\n 3502 ):\r\n 3503 \"\"\"Pushes the dataset to the hub.\r\n 3504 The dataset is pushed using HTTP requests and does not need to have neither git or git-lfs installed.\r\n 3505 \r\n (...)\r\n 3536 ```\r\n 3537 \"\"\"\r\n-> 3538 repo_id, split, uploaded_size, dataset_nbytes = self._push_parquet_shards_to_hub(\r\n 3539 repo_id=repo_id,\r\n 3540 split=split,\r\n 3541 private=private,\r\n 3542 token=token,\r\n 3543 branch=branch,\r\n 3544 shard_size=shard_size,\r\n 3545 embed_external_files=embed_external_files,\r\n 3546 )\r\n 3547 organization, dataset_name = repo_id.split(\"/\")\r\n 3548 info_to_dump = self.info.copy()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3474, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3472 shard.to_parquet(buffer)\r\n 3473 uploaded_size += buffer.tell()\r\n-> 3474 _retry(\r\n 3475 api.upload_file,\r\n 3476 func_kwargs=dict(\r\n 3477 path_or_fileobj=buffer.getvalue(),\r\n 3478 path_in_repo=path_in_repo(index),\r\n 3479 repo_id=repo_id,\r\n 3480 token=token,\r\n 3481 repo_type=\"dataset\",\r\n 3482 revision=branch,\r\n 3483 identical_ok=True,\r\n 3484 ),\r\n 3485 exceptions=HTTPError,\r\n 3486 status_codes=[504],\r\n 3487 base_wait_time=2.0,\r\n 3488 max_retries=5,\r\n 3489 max_wait_time=20.0,\r\n 3490 )\r\n 3491 return repo_id, split, uploaded_size, dataset_nbytes\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py:330, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 328 while True:\r\n 329 try:\r\n--> 330 return func(*func_args, **func_kwargs)\r\n 331 except exceptions as err:\r\n 332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)\r\n 117 if check_use_auth_token:\r\n 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n--> 120 return fn(*args, **kwargs)\r\n\r\nTypeError: HfApi.upload_file() got an unexpected keyword argument 'identical_ok'\r\n```",
"Feel free to update `datasets` and `huggingface-hub`, it should fix it :)",
"I went ahead and upgraded both datasets and hub and still getting the same error\r\n",
"Which version do you have ? It's been a while since it has been fixed",
"huggingface 0.0.1\r\nhuggingface-hub 0.17.1\r\ndatasets 2.14.5\r\n\r\nstill has the issue!!",
"I face the same issue even after upgrading :/"
] | 2023-02-08T10:18:41
| 2023-12-28T18:21:01
| 2023-02-08T10:35:48
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dataset["train"] = dataset["train"].select(range(20))
dataset.push_to_hub("patrickvonplaten/dummy_image_data")
```
gives:
```
~/python_bin/datasets/arrow_dataset.py in _push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
4003 base_wait_time=2.0,
4004 max_retries=5,
-> 4005 max_wait_time=20.0,
4006 )
4007 return repo_id, split, uploaded_size, dataset_nbytes
~/python_bin/datasets/utils/file_utils.py in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
328 while True:
329 try:
--> 330 return func(*func_args, **func_kwargs)
331 except exceptions as err:
332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
~/hf/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
122 )
123
--> 124 return fn(*args, **kwargs)
125
126 return _inner_fn # type: ignore
TypeError: upload_file() got an unexpected keyword argument 'identical_ok'
In [2]:
```
### Expected behavior
I would have expected this to work. It's for me the most intuitive way of creating a dummy dataset.
### Environment info
```
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.3
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5511/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5511/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:17:07
|
https://api.github.com/repos/huggingface/datasets/issues/5508
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5508/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5508/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5508/events
|
https://github.com/huggingface/datasets/issues/5508
| 1,573,290,359
|
I_kwDODunzps5dxoF3
| 5,508
|
Saving a dataset after setting format to torch doesn't work, but only if filtering
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13984157?v=4",
"events_url": "https://api.github.com/users/joebhakim/events{/privacy}",
"followers_url": "https://api.github.com/users/joebhakim/followers",
"following_url": "https://api.github.com/users/joebhakim/following{/other_user}",
"gists_url": "https://api.github.com/users/joebhakim/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joebhakim",
"id": 13984157,
"login": "joebhakim",
"node_id": "MDQ6VXNlcjEzOTg0MTU3",
"organizations_url": "https://api.github.com/users/joebhakim/orgs",
"received_events_url": "https://api.github.com/users/joebhakim/received_events",
"repos_url": "https://api.github.com/users/joebhakim/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joebhakim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joebhakim/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joebhakim",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hey, I'm a research engineer working on language modelling wanting to contribute to open source. I was wondering if I could give it a shot?",
"Hi! This issue was fixed in https://github.com/huggingface/datasets/pull/4972, so please install `datasets>=2.5.0` to avoid it."
] | 2023-02-06T21:08:58
| 2023-02-09T14:55:26
| 2023-02-09T14:55:26
|
NONE
| null | null | null | null |
### Describe the bug
Saving a dataset after setting format to torch doesn't work, but only if filtering
### Steps to reproduce the bug
```
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save") # saves successfully
a.filter(None).save_to_disk("test_save_filter") # does not
>> [...] TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types [<class 'torch.Tensor'>]. When using `batched=True`, make sure provided `function` returns a `dict` of types like `(<class 'list'>, <class 'numpy.ndarray'>)`.
# note: skipping the format change to torch lets this work.
### Expected behavior
Saving to work
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-6.1.9-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.9
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5508/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5508/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 17:46:28
|
https://api.github.com/repos/huggingface/datasets/issues/5507
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5507/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5507/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5507/events
|
https://github.com/huggingface/datasets/issues/5507
| 1,572,667,036
|
I_kwDODunzps5dvP6c
| 5,507
|
Optimise behaviour in respect to indices mapping
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] |
[] | 2023-02-06T14:25:55
| 2023-02-28T18:19:18
| null |
COLLABORATOR
| null | null | null | null |
_Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_
Considering all this, perhaps for Datasets 3.0, we can do the following:
* [ ] have `continuous=True` by default in `.shard` (requested in the survey and makes more sense for us since it doesn't create an indices mapping)
* [x] allow calling `save_to_disk` on "unflattened" datasets
* [ ] remove "hidden" expensive calls in `save_to_disk`, `unique`, `concatenate_datasets`, etc. For instance, instead of silently calling `flatten_indices` where it's needed, it's probably better to be explicit (considering how expensive these ops can be) and raise an error instead
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5507/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5507/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5506
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5506/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5506/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5506/events
|
https://github.com/huggingface/datasets/issues/5506
| 1,571,838,641
|
I_kwDODunzps5dsFqx
| 5,506
|
IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4",
"events_url": "https://api.github.com/users/kheyer/events{/privacy}",
"followers_url": "https://api.github.com/users/kheyer/followers",
"following_url": "https://api.github.com/users/kheyer/following{/other_user}",
"gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kheyer",
"id": 38166299,
"login": "kheyer",
"node_id": "MDQ6VXNlcjM4MTY2Mjk5",
"organizations_url": "https://api.github.com/users/kheyer/orgs",
"received_events_url": "https://api.github.com/users/kheyer/received_events",
"repos_url": "https://api.github.com/users/kheyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kheyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kheyer",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! `datasets` doesn't do batching - the PyTorch DataLoader does and is created by the `Trainer`. Do you pass other arguments to training_args with respect to data loading ?\r\n\r\nAlso we recently released `.to_iterable_dataset` that does pretty much what you implemented, but using contiguous shards to get a better speed:\r\n```python\r\nif use_iterable_dataset:\r\n num_shards = 100\r\n dataset = dataset.to_iterable_dataset(num_shards=num_shards)\r\n```",
"This is the full set of training args passed. No training args were changed when switching dataset types.\r\n\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./checkpoints\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=256,\r\n save_steps=2000,\r\n save_total_limit=4,\r\n prediction_loss_only=True,\r\n report_to='none',\r\n gradient_accumulation_steps=6,\r\n fp16=True,\r\n max_steps=60000,\r\n lr_scheduler_type='linear',\r\n warmup_ratio=0.1,\r\n logging_steps=100,\r\n weight_decay=0.01,\r\n adam_beta1=0.9,\r\n adam_beta2=0.98,\r\n adam_epsilon=1e-6,\r\n learning_rate=1e-4\r\n)\r\n```",
"I think the issue comes from `transformers`: https://github.com/huggingface/transformers/issues/21444",
"Makes sense. Given that it's a `transformers` issue and already being tracked, I'll close this out."
] | 2023-02-06T03:26:03
| 2023-02-08T18:30:08
| 2023-02-08T18:30:07
|
NONE
| null | null | null | null |
### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous shards and passing those to an `IterableDataset`. I observed an unexpected drop in GPU memory utilization, and found the batch size returned from the model had been cut in half.
When using `Trainer` with 2 GPUs and a batch size of 256, `Dataset` returns a batch of size 512 (256 per GPU), while `IterableDataset` returns a batch size of 256 (256 total). My guess is `IterableDataset` isn't accounting for multiple cards.
### Steps to reproduce the bug
```python
import datasets
from datasets import IterableDataset
from transformers import RobertaConfig
from transformers import RobertaTokenizerFast
from transformers import RobertaForMaskedLM
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
use_iterable_dataset = True
def gen_from_shards(shards):
for shard in shards:
for example in shard:
yield example
dataset = datasets.load_from_disk('my_dataset.hf')
if use_iterable_dataset:
n_shards = 100
shards = [dataset.shard(num_shards=n_shards, index=i) for i in range(n_shards)]
dataset = IterableDataset.from_generator(gen_from_shards, gen_kwargs={"shards": shards})
tokenizer = RobertaTokenizerFast.from_pretrained("./my_tokenizer", max_len=160, use_fast=True)
config = RobertaConfig(
vocab_size=8248,
max_position_embeddings=256,
num_attention_heads=8,
num_hidden_layers=6,
type_vocab_size=1)
model = RobertaForMaskedLM(config=config)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args = TrainingArguments(
per_device_train_batch_size=256
# other args removed for brevity
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
trainer.train()
```
### Expected behavior
Expected `Dataset` and `IterableDataset` to have the same batch size behavior. If the current behavior is intentional, the batch size printout at the start of training should be updated. Currently, both dataset classes result in `Trainer` printing the same total batch size, even though the batch size sent to the GPUs are different.
### Environment info
datasets 2.7.1
transformers 4.25.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4",
"events_url": "https://api.github.com/users/kheyer/events{/privacy}",
"followers_url": "https://api.github.com/users/kheyer/followers",
"following_url": "https://api.github.com/users/kheyer/following{/other_user}",
"gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kheyer",
"id": 38166299,
"login": "kheyer",
"node_id": "MDQ6VXNlcjM4MTY2Mjk5",
"organizations_url": "https://api.github.com/users/kheyer/orgs",
"received_events_url": "https://api.github.com/users/kheyer/received_events",
"repos_url": "https://api.github.com/users/kheyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kheyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kheyer",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5506/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5506/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 15:04:04
|
https://api.github.com/repos/huggingface/datasets/issues/5505
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5505/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5505/events
|
https://github.com/huggingface/datasets/issues/5505
| 1,571,720,814
|
I_kwDODunzps5dro5u
| 5,505
|
PyTorch BatchSampler still loads from Dataset one-by-one
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"This change seems to come from a few months ago in the PyTorch side. That's good news and it means we may not need to pass a batch_sampler as soon as we add `Dataset.__getitems__` to get the optimal speed :)\r\n\r\nThanks for reporting ! Would you like to open a PR to add `__getitems__` and remove this outdated documentation ?",
"Yeah I figured this was the sort of thing that probably once worked. I can confirm that you no longer need the batch sampler, just `batch_size=n` in the `DataLoader`.\r\n\r\nI'll pass on the PR, I'm flat out right now, sorry."
] | 2023-02-06T01:14:55
| 2023-02-19T18:27:30
| 2023-02-19T18:27:30
|
NONE
| null | null | null | null |
### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or the code, but it seems that the only way for a Dataset to be passed a list of indexes by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object, and since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58), reverting to fetching one-by-one.
### Steps to reproduce the bug
You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs:
```py
from torch.utils.data.sampler import BatchSampler, RandomSampler
batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```
### Expected behavior
The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one.
To demonstrate that there is room for improvement: once I have a HF dataset `ds`, if I just add this line:
```py
ds.__getitems__ = ds.__getitem__
```
...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but seems like an easy win.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5505/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 13 days, 17:12:35
|
https://api.github.com/repos/huggingface/datasets/issues/5500
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5500/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5500/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5500/events
|
https://github.com/huggingface/datasets/issues/5500
| 1,569,257,240
|
I_kwDODunzps5diPcY
| 5,500
|
WMT19 custom download checksum error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hannibal046",
"id": 38466901,
"login": "Hannibal046",
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hannibal046",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I update the `datatsets` version and it works."
] | 2023-02-03T05:45:37
| 2023-02-03T05:52:56
| 2023-02-03T05:52:56
|
NONE
| null | null | null | null |
### Describe the bug
I use the following scripts to download data from WMT19:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder
from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS
## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-files/28034/3
if __name__ == '__main__':
dev_subsets,train_subsets = [],[]
for subset in _TRAIN_SUBSETS:
if subset.target=='en' and 'de' in subset.sources:
train_subsets.append(subset.name)
for subset in _DEV_SUBSETS:
if subset.target=='en' and 'de' in subset.sources:
dev_subsets.append(subset.name)
inspect_dataset("wmt19", "./wmt19")
builder = load_dataset_builder(
"./wmt19/wmt_utils.py",
language_pair=("de", "en"),
subsets={
datasets.Split.TRAIN: train_subsets,
datasets.Split.VALIDATION: dev_subsets,
},
)
builder.download_and_prepare()
ds = builder.as_dataset()
ds.to_json("../data/wmt19/ende/data.json")
```
And I got the following error:
```
Traceback (most recent call last): | 0/2 [00:00<?, ?obj/s]
File "draft.py", line 26, in <module>
builder.download_and_prepare() | 0/1 [00:00<?, ?obj/s]
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare
self._download_and_prepare(%| | 0/1 [00:00<?, ?obj/s]
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) | 0/1 [00:00<?, ?obj/s]
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 676, in _download_and_prepare
verify_checksums(s #13: 0%| | 0/1 [00:00<?, ?obj/s]
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 35, in verify_checksums
raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) | 0/1 [00:00<?, ?obj/s]
datasets.utils.info_utils.UnexpectedDownloadedFile: {'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-de.zipporah0-dedup-clean.tgz', 'https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip', 'https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip', 'https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip', 'https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip', 'https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip'}
```
### Steps to reproduce the bug
see above
### Expected behavior
download data successfully
### Environment info
datasets==2.1.0
python==3.8
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hannibal046",
"id": 38466901,
"login": "Hannibal046",
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hannibal046",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5500/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5500/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:07:19
|
https://api.github.com/repos/huggingface/datasets/issues/5499
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5499/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5499/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5499/events
|
https://github.com/huggingface/datasets/issues/5499
| 1,568,937,026
|
I_kwDODunzps5dhBRC
| 5,499
|
`load_dataset` has ~4 seconds of overhead for cached data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"Hi ! To skip the verification step that checks if newer data exist, you can enable offline mode with `HF_DATASETS_OFFLINE=1`.\r\n\r\nAlthough I agree this step should be much faster for datasets hosted on the HF Hub - we could just compare the commit hash from the local data and the remote git repository. We're not been leveraging the git commit hashes, since the library was built before we even had git repositories for each dataset on HF.",
"Thanks @lhoestq, for memory when I recorded those times I had `HF_DATASETS_OFFLINE` set."
] | 2023-02-02T23:34:50
| 2023-02-07T19:35:11
| null |
NONE
| null | null | null | null |
### Feature request
When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should take to fetch the dataset from disk (or memory).
This is particularly noticeable for smaller datasets. For example, wikitext-2, comparing `load_data` (once cached) and `load_from_disk`, the `load_dataset` method takes 40 times longer.
⏱ 4.84s ⮜ load_dataset
⏱ 119ms ⮜ load_from_disk
### Motivation
I assume this is doing something like checking for a newer version.
If so, that's an age old problem: do you make the user wait _every single time they load from cache_ or do you do something like load from cache always, _then_ check for a newer version and alert if they have stale data. The decision usually revolves around what percentage of the time the data will have been updated, and how dangerous old data is.
For most datasets it's extremely unlikely that there will be a newer version on any given run, so 99% of the time this is just wasted time.
Maybe you don't want to make that decision for all users, but at least having the _option_ to not wait for checks would be an improvement.
### Your contribution
.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5499/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5499/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5498
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5498/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5498/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5498/events
|
https://github.com/huggingface/datasets/issues/5498
| 1,568,190,529
|
I_kwDODunzps5deLBB
| 5,498
|
TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/91255010?v=4",
"events_url": "https://api.github.com/users/vmuel/events{/privacy}",
"followers_url": "https://api.github.com/users/vmuel/followers",
"following_url": "https://api.github.com/users/vmuel/following{/other_user}",
"gists_url": "https://api.github.com/users/vmuel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vmuel",
"id": 91255010,
"login": "vmuel",
"node_id": "MDQ6VXNlcjkxMjU1MDEw",
"organizations_url": "https://api.github.com/users/vmuel/orgs",
"received_events_url": "https://api.github.com/users/vmuel/received_events",
"repos_url": "https://api.github.com/users/vmuel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vmuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vmuel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vmuel",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! Instead of a single boolean, your filter function should return an iterable (of booleans) in the batched mode like so:\r\n```python\r\ntrain_dataset = train_dataset.filter(\r\n function=lambda batch: [image is not None for image in batch[\"image\"]], \r\n batched=True,\r\n batch_size=10)\r\n```\r\n\r\nPS: You can make this operation much faster by operating directly on the arrow data to skip the decoding part:\r\n```python\r\ntrain_dataset = train_dataset.with_format(\"arrow\")\r\ntrain_dataset = train_dataset.filter(\r\n function=lambda table: table[\"image\"].is_valid().to_pylist(), \r\n batched=True,\r\n batch_size=100)\r\ntrain_dataset = train_dataset.with_format(None)\r\n```",
"Thank a lot!",
"I hit the same issue and the error message isn't really clear on what's going wrong. It might be helpful to update the docs with a batched example."
] | 2023-02-02T14:46:49
| 2023-10-08T06:12:47
| 2023-02-04T17:19:36
|
NONE
| null | null | null | null |
### Describe the bug
Hi,
Thanks for the amazing work on the library!
**Describe the bug**
I think I might have noticed a small bug in the filter method.
Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError.
### Steps to reproduce the bug
```
train_dataset = train_dataset.filter(
function=lambda example: example["image"] is not None,
batched=True,
batch_size=10)
```
Error message:
```
File .../lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
...
-> 5666 indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
5667 if indices_mapping is not None:
5668 indices_array = pa.array(indices_array, type=pa.uint64())
TypeError: 'bool' object is not iterable
```
**Removing batched=True allows to bypass the issue.**
### Expected behavior
According to the doc, "[batch_size corresponds to the] number of examples per batch provided to function if batched = True", so we shouldn't need to remove the batchd=True arg?
source: https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.Dataset.filter
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/91255010?v=4",
"events_url": "https://api.github.com/users/vmuel/events{/privacy}",
"followers_url": "https://api.github.com/users/vmuel/followers",
"following_url": "https://api.github.com/users/vmuel/following{/other_user}",
"gists_url": "https://api.github.com/users/vmuel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vmuel",
"id": 91255010,
"login": "vmuel",
"node_id": "MDQ6VXNlcjkxMjU1MDEw",
"organizations_url": "https://api.github.com/users/vmuel/orgs",
"received_events_url": "https://api.github.com/users/vmuel/received_events",
"repos_url": "https://api.github.com/users/vmuel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vmuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vmuel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vmuel",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5498/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5498/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 2:32:47
|
https://api.github.com/repos/huggingface/datasets/issues/5496
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5496/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5496/events
|
https://github.com/huggingface/datasets/issues/5496
| 1,567,301,765
|
I_kwDODunzps5dayCF
| 5,496
|
Add a `reduce` method
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4",
"events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangir-azerbayev/followers",
"following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangir-azerbayev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhangir-azerbayev",
"id": 59542043,
"login": "zhangir-azerbayev",
"node_id": "MDQ6VXNlcjU5NTQyMDQz",
"organizations_url": "https://api.github.com/users/zhangir-azerbayev/orgs",
"received_events_url": "https://api.github.com/users/zhangir-azerbayev/received_events",
"repos_url": "https://api.github.com/users/zhangir-azerbayev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhangir-azerbayev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangir-azerbayev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhangir-azerbayev",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi! Sure, feel free to open a PR, so we can see the API you have in mind.",
"I would like to give it a go! #self-assign",
"Closing as `Dataset.map` can be used instead (see https://github.com/huggingface/datasets/pull/5533#issuecomment-1440571658 and https://github.com/huggingface/datasets/pull/5533#issuecomment-1446403263)",
"Hello, is it possible for this issue/PR to be revisited? The problem with the alternatives presented (besides multiple map stages) is that they don't use the cache. A reduce operation is just as expensive as a map operation because it also goes over the entire dataset. It's equally worth caching.\r\n\r\nPersonally, I have a situation where I would need this and map is far from ideal. I'm working on updating a project of mine to use Huggingface Datasets, and I need to port the loop at https://github.com/colonelwatch/abstracts-search/blob/b90f31ee4cc6e394f829d3a6d9d0311ca390ada9/train.py#L112-L138. Please forgive the code style, here's what it does in English. I have a dataset of about 95 million embeddings, out of which 16384 is taken as a \"query\" set. For each embedding in the query set, I need to find the ten closest neighbors. These nearest neighbors are used to tune the parameters of a faiss index. The solution is to set up an \"accumulator\" comprising of the ten closest so far and their distances, then do a single scan over the 95 million (memmapped), then save the results of the \"accumulator\" for when I want to prototype another index.\r\n\r\nThe closest approximation to this is multiple map stages, but with such a large \"accumulator\" having the RAM to do a big batch size becomes critical. At a batch size of 1000, the intermediate accumulators would in theory be about 120 GB! That can be more if I want higher precision than float32. It would already be about the same size as the original embeddings. Using larger batch sizes puts strain on the RAM because I'd be dealing with batch_size x 16384 distances. The best I'd gotten with my RAM, single-threaded, was 65536, and for speed I had to use that thread to feed a GPU. It'd be better if I could use multiple threads to get high throughput instead, or even do all the work in CPU, but to fit the threads I'd need the batch size to be smaller.\r\n\r\nAll of this intermediate memory could be eliminated if there was a reduce operation."
] | 2023-02-02T04:30:22
| 2024-11-12T05:58:14
| 2023-07-21T14:24:32
|
NONE
| null | null | null | null |
### Feature request
Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.
### Motivation
A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average line length of a code dataset.
### Your contribution
I haven't contributed to `datasets` before, but I don't expect this will be too difficult, since the implementation will closely follow that of `map` and `filter`. I could have a crack over the weekend.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5496/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 169 days, 9:54:10
|
https://api.github.com/repos/huggingface/datasets/issues/5495
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5495/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5495/events
|
https://github.com/huggingface/datasets/issues/5495
| 1,566,803,452
|
I_kwDODunzps5dY4X8
| 5,495
|
to_tf_dataset fails with datetime UTC columns even if not included in columns argument
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dwyatte",
"id": 2512762,
"login": "dwyatte",
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dwyatte",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"Hi! This is indeed a bug in our zero-copy logic.\r\n\r\nTo fix it, instead of the line:\r\nhttps://github.com/huggingface/datasets/blob/7cfac43b980ab9e4a69c2328f085770996323005/src/datasets/features/features.py#L702\r\n\r\nwe should have:\r\n```python\r\nreturn pa.types.is_primitive(pa_type) and not (pa.types.is_boolean(pa_type) or pa.types.is_temporal(pa_type))\r\n```",
"@mariosasko submitted a small PR [here](https://github.com/huggingface/datasets/pull/5504)"
] | 2023-02-01T20:47:33
| 2023-02-08T14:33:19
| 2023-02-08T14:33:19
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset even if they aren't included in the columns argument. This is problematic with datetime UTC columns due to them not working with zero copy. If I don't have UTC information in my datetime column, then everything works as expected.
### Steps to reproduce the bug
```python
import numpy as np
import pandas as pd
from datasets import Dataset
df = pd.DataFrame(np.random.rand(2, 1), columns=["x"])
# df["dt"] = pd.to_datetime(["2023-01-01", "2023-01-01"]) # works fine
df["dt"] = pd.to_datetime(["2023-01-01 00:00:00.00000+00:00", "2023-01-01 00:00:00.00000+00:00"])
df.to_parquet("test.pq")
ds = Dataset.from_parquet("test.pq")
tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```
```
ArrowInvalid Traceback (most recent call last)
Cell In[1], line 12
8 df.to_parquet("test.pq")
11 ds = Dataset.from_parquet("test.pq")
---> 12 tf_ds = ds.to_tf_dataset(columns=["r"], batch_size=2, shuffle=True)
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:411, in TensorflowDatasetMixin.to_tf_dataset(self, batch_size, columns, shuffle, collate_fn, drop_remainder, collate_fn_args, label_cols, prefetch, num_workers)
407 dataset = self
409 # TODO(Matt, QL): deprecate the retention of label_ids and label
--> 411 output_signature, columns_to_np_types = dataset._get_output_signature(
412 dataset,
413 collate_fn=collate_fn,
414 collate_fn_args=collate_fn_args,
415 cols_to_retain=cols_to_retain,
416 batch_size=batch_size if drop_remainder else None,
417 )
419 if "labels" in output_signature:
420 if ("label_ids" in columns or "label" in columns) and "labels" not in columns:
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:254, in TensorflowDatasetMixin._get_output_signature(dataset, collate_fn, collate_fn_args, cols_to_retain, batch_size, num_test_batches)
252 for _ in range(num_test_batches):
253 indices = sample(range(len(dataset)), test_batch_size)
--> 254 test_batch = dataset[indices]
255 if cols_to_retain is not None:
256 test_batch = {key: value for key, value in test_batch.items() if key in cols_to_retain}
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2590, in Dataset.__getitem__(self, key)
2588 def __getitem__(self, key): # noqa: F811
2589 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2590 return self._getitem(
2591 key,
2592 )
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2575, in Dataset._getitem(self, key, **kwargs)
2573 formatter = get_formatter(format_type, features=self.features, **format_kwargs)
2574 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2575 formatted_output = format_table(
2576 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2577 )
2578 return formatted_output
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:634, in format_table(table, key, formatter, format_columns, output_all_columns)
632 python_formatter = PythonFormatter(features=None)
633 if format_columns is None:
--> 634 return formatter(pa_table, query_type=query_type)
635 elif query_type == "column":
636 if key in format_columns:
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:410, in Formatter.__call__(self, pa_table, query_type)
408 return self.format_column(pa_table)
409 elif query_type == "batch":
--> 410 return self.format_batch(pa_table)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/np_formatter.py:78, in NumpyFormatter.format_batch(self, pa_table)
77 def format_batch(self, pa_table: pa.Table) -> Mapping:
---> 78 batch = self.numpy_arrow_extractor().extract_batch(pa_table)
79 batch = self.python_features_decoder.decode_batch(batch)
80 batch = self.recursive_tensorize(batch)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in NumpyArrowExtractor.extract_batch(self, pa_table)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in <dictcomp>(.0)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:185, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
--> 185 array: List = [
186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:186, in <listcomp>(.0)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
185 array: List = [
--> 186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/pyarrow/array.pxi:1475, in pyarrow.lib.Array.to_numpy()
File ~/venv/lib/python3.8/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: Needed to copy 1 chunks with 0 nulls, but zero_copy_only was True
```
### Expected behavior
I think there are two potential issues/fixes
1. Proper handling of datetime UTC columns (perhaps there is something incorrect with zero copy handling here)
2. Not eagerly running against every column in a dataset when the columns argument of `to_tf_dataset` specifies a subset of columns (although I'm not sure if this is unavoidable)
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.2-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5495/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 17:45:46
|
https://api.github.com/repos/huggingface/datasets/issues/5494
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5494/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5494/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5494/events
|
https://github.com/huggingface/datasets/issues/5494
| 1,566,655,348
|
I_kwDODunzps5dYUN0
| 5,494
|
Update audio installation doc page
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] |
[
"Totally agree, the docs should be in sync with our code.\r\n\r\nIndeed to avoid confusing users, I think we should have updated the docs at the same time as this PR:\r\n- #5167",
"@albertvillanova yeah sure I should have, but I forgot back then, sorry for that 😶",
"No, @polinaeterna, nothing to be sorry about.\r\n\r\nMy comment was for all of us datasets team, as a reminder: when making a PR, but also when reviewing some other's PR, we should not forget to update the corresponding docstring and doc pages. It is something we can improve if we help each other in reminding about it... :hugs: ",
"@polinaeterna I think we can close this issue now as we no longer use `torchaudio` for decoding."
] | 2023-02-01T19:07:50
| 2023-03-02T16:08:17
| 2023-03-02T16:08:17
|
CONTRIBUTOR
| null | null | null | null |
Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too but requires a specific version of ffmpeg which is not easily installed on all linux versions but there is a custom ubuntu repo for it, we have insctructions in the code: https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L327
So we should update the doc page. But first investigate [this issue](5488).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5494/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5494/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 28 days, 21:00:27
|
https://api.github.com/repos/huggingface/datasets/issues/5492
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5492/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5492/events
|
https://github.com/huggingface/datasets/issues/5492
| 1,566,604,216
|
I_kwDODunzps5dYHu4
| 5,492
|
Push_to_hub in a pull request
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw",
"user_view_type": "public"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4",
"events_url": "https://api.github.com/users/AJDERS/events{/privacy}",
"followers_url": "https://api.github.com/users/AJDERS/followers",
"following_url": "https://api.github.com/users/AJDERS/following{/other_user}",
"gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AJDERS",
"id": 38854604,
"login": "AJDERS",
"node_id": "MDQ6VXNlcjM4ODU0NjA0",
"organizations_url": "https://api.github.com/users/AJDERS/orgs",
"received_events_url": "https://api.github.com/users/AJDERS/received_events",
"repos_url": "https://api.github.com/users/AJDERS/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AJDERS",
"user_view_type": "public"
}
] |
[
"Assigned to myself and will get to it in the next week, but if someone finds this issue annoying and wants to submit a PR before I do, just ping me here and I'll reassign :). ",
"I would like to be assigned to this issue, @nateraw . #self-assign"
] | 2023-02-01T18:32:14
| 2023-10-16T13:30:48
| 2023-10-16T13:30:48
|
MEMBER
| null | null | null | null |
Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name
cc @nateraw
It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5492/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 256 days, 18:58:34
|
https://api.github.com/repos/huggingface/datasets/issues/5488
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5488/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5488/events
|
https://github.com/huggingface/datasets/issues/5488
| 1,565,025,262
|
I_kwDODunzps5dSGPu
| 5,488
|
Error loading MP3 files from CommonVoice
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/110259722?v=4",
"events_url": "https://api.github.com/users/kradonneoh/events{/privacy}",
"followers_url": "https://api.github.com/users/kradonneoh/followers",
"following_url": "https://api.github.com/users/kradonneoh/following{/other_user}",
"gists_url": "https://api.github.com/users/kradonneoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kradonneoh",
"id": 110259722,
"login": "kradonneoh",
"node_id": "U_kgDOBpJuCg",
"organizations_url": "https://api.github.com/users/kradonneoh/orgs",
"received_events_url": "https://api.github.com/users/kradonneoh/received_events",
"repos_url": "https://api.github.com/users/kradonneoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kradonneoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kradonneoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kradonneoh",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @kradonneoh, thanks for reporting.\r\n\r\nPlease note that to work with audio datasets (and specifically with MP3 files) we have detailed installation instructions in our docs: https://huggingface.co/docs/datasets/installation#audio\r\n- one of the requirements is torchaudio<0.12.0\r\n\r\nLet us know if the problem persists after having followed them.",
"I saw that and have followed it (hence the Expected Behavior section of the bug report). \r\n\r\nIs there no intention of updating to the latest version? It does limit the version of `torch` I can use, which isn’t ideal.",
"@kradonneoh hey! actually with `ffmpeg4` loading of mp3 files should work, so this is a not expected behavior and we need to investigate it. It works on my side with `torchaudio==0.13` and `ffmpeg==4.2.7`. Which `torchaudio` version do you use?\r\n\r\n`datasets` should support decoding of mp3 files with `torchaudio` when its version is `>0.12` but as you noted it requires `ffmpeg>4`, we need to fix this in the documentation, thank you for pointing to this! \r\n\r\nBut according to your traceback it seems that it tries to use [`libsndfile`](https://github.com/libsndfile/libsndfile) backend for mp3 decoding. And `libsndfile` library supports mp3 decoding starting from version 1.1.0 which on Linux has to be compiled from source for now afaik. \r\n\r\nfyi - we are aiming at getting rid of `torchaudio` dependency at all by the next major library release in favor of `libsndfile` too.",
"We now decode MP3 with `soundfile`, so I'm closing this issue"
] | 2023-01-31T21:25:33
| 2023-03-02T16:25:14
| 2023-03-02T16:25:13
|
NONE
| null | null | null | null |
### Describe the bug
When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays:
```python
---------------------------------------------------------------------------
LibsndfileError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3(self, path_or_file)
310 try: # try torchaudio anyway because sometimes it works (depending on the os and os packages installed)
--> 311 array, sampling_rate = self._decode_mp3_torchaudio(path_or_file)
312 except RuntimeError:
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3_torchaudio(self, path_or_file)
351
--> 352 array, sampling_rate = torchaudio.load(path_or_file, format="mp3")
353 if self.sampling_rate and self.sampling_rate != sampling_rate:
~/.local/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
204 """
--> 205 with soundfile.SoundFile(filepath, "r") as file_:
206 if file_.format != "WAV" or normalize:
~/.local/lib/python3.8/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
654 format, subtype, endian)
--> 655 self._file = self._open(file, mode_int, closefd)
656 if set(mode).issuperset('r+') and self.seekable():
~/.local/lib/python3.8/site-packages/soundfile.py in _open(self, file, mode_int, closefd)
1212 err = _snd.sf_error(file_ptr)
-> 1213 raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
1214 if mode_int == _snd.SFM_WRITE:
LibsndfileError: Error opening <_io.BytesIO object at 0x7fa539462090>: File contains data in an unknown format.
```
I assume this is because there's some issue with the mp3 decoding process. I've verified that I have `ffmpeg>=4` (on a Linux distro), which appears to be the fallback backend for `torchaudio,` (at least according to #4889).
### Steps to reproduce the bug
```python
dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train")
dataset[0]
```
### Expected behavior
Similar behavior to `torchaudio<0.12.0`, which doesn't result in a `LibsndfileError`
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5488/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 29 days, 18:59:40
|
https://api.github.com/repos/huggingface/datasets/issues/5487
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5487/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5487/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5487/events
|
https://github.com/huggingface/datasets/issues/5487
| 1,564,480,121
|
I_kwDODunzps5dQBJ5
| 5,487
|
Incorrect filepath for dill module
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4",
"events_url": "https://api.github.com/users/avivbrokman/events{/privacy}",
"followers_url": "https://api.github.com/users/avivbrokman/followers",
"following_url": "https://api.github.com/users/avivbrokman/following{/other_user}",
"gists_url": "https://api.github.com/users/avivbrokman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avivbrokman",
"id": 35349273,
"login": "avivbrokman",
"node_id": "MDQ6VXNlcjM1MzQ5Mjcz",
"organizations_url": "https://api.github.com/users/avivbrokman/orgs",
"received_events_url": "https://api.github.com/users/avivbrokman/received_events",
"repos_url": "https://api.github.com/users/avivbrokman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avivbrokman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avivbrokman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avivbrokman",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! The correct path is still `dill._dill.XXXX` in the latest release. What do you get when you run `python -c \"import dill; print(dill.__version__)\"` in your environment?",
"`0.3.6` I feel like that's bad news, because it's probably not the issue.\r\n\r\nMy mistake, about the wrong path guess. I think I didn't notice that the first `dill` in the path isn't supposed to be included in the path specification in python.\r\n<img width=\"146\" alt=\"Screen Shot 2023-01-31 at 12 58 32 PM\" src=\"https://user-images.githubusercontent.com/35349273/215844209-74af6a8f-9bff-4c75-9495-44c658c8e9f7.png\">\r\n",
"Hi, @avivbrokman, this issue you report appeared only with old versions of dill. See:\r\n- #288\r\n\r\nAre you sure you are in the right Python environment?\r\n- Please note that Jupyter (where I guess you get the error) may have multiple execution backends (IPython kernels) that might be different from the Python environment your are using to get the dill version\r\n - Have you run `import dill; print(dill.__version__)` in the same Jupyter/IPython that you were using when you got the error while executing `import datasets`?",
"I'm using spyder, and I am still getting `0.3.6` for `dill`, so unfortunately #288 isn't applicable, I think. However, I found something odd that I believe is a clue: \r\n\r\n```\r\nimport inspect\r\nimport dill\r\n\r\ninspect.getfile(dill)\r\n>>> '/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/dill/__init__.py'\r\n```\r\n\r\nI checked out the directory, and there is no `dill` subdirectory within '/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/dill`, as there should be. Rather, `_dill.py` is in '/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/dill` itself. \r\n\r\n If I run `pip install dill` or `pip install --upgrade dill`, I get the message `Requirement already satisfied: dill in ./opt/anaconda3/lib/python3.9/site-packages (0.3.6)`. If I run `conda upgrade dill`, I get the message `Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.` a couple of times, followed by\r\n\r\n```\r\nSolving environment: failed\r\nSolving environment: / \r\nFound conflicts! Looking for incompatible packages.\r\n```\r\n\r\nAnd then terminal proceeds to list conflicts between different packages I have.\r\n\r\nThis is all very strange to me because I recently uninstalled and reinstalled `anaconda`.\r\n",
"As I said above, I guess this is not a problem with `datasets`. I think you have different Python environments: one with the new dill version (the one you get while using pip) and other with the old dill version (the one where you get the AttributeError).\r\n\r\nYou should update `dill` in the Python environment you are using within spyder.\r\n\r\nPlease note that the `_dill` module is present in the `dill` package since their 2.8.0 version."
] | 2023-01-31T15:01:08
| 2023-02-24T16:18:36
| 2023-02-24T16:18:36
|
NONE
| null | null | null | null |
### Describe the bug
I installed the `datasets` package and when I try to `import` it, I get the following error:
```
Traceback (most recent call last):
File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module>
import datasets
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 66, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 27, in <module>
from .features import Features, Image, Value
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/__init__.py", line 17, in <module>
from .audio import Audio
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/audio.py", line 12, in <module>
from ..download.streaming_download_manager import xopen
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/download_manager.py", line 36, in <module>
from ..utils.py_utils import NestedDataStructure, map_nested, size_str
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 602, in <module>
class Pickler(dill.Pickler):
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 605, in Pickler
dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())
AttributeError: module 'dill' has no attribute '_dill'
```
Looking at the github source code for dill, it appears that `datasets` has a bug or is not compatible with the latest `dill`. Specifically, rather than `dill._dill.XXXX` it should be `dill.dill._dill.XXXX`. But given the popularity of `datasets` I feel confused about me being the first person to have this issue, so it makes me wonder if I'm misdiagnosing the issue.
### Steps to reproduce the bug
Install `dill` and `datasets` packages and then `import datasets`
### Expected behavior
I expect `datasets` to import.
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.13
- PyArrow version: 11.0.0
- Pandas version: 1.4.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5487/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5487/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 24 days, 1:17:28
|
https://api.github.com/repos/huggingface/datasets/issues/5486
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5486/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5486/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5486/events
|
https://github.com/huggingface/datasets/issues/5486
| 1,564,059,749
|
I_kwDODunzps5dOahl
| 5,486
|
Adding `sep` to TextConfig
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29576434?v=4",
"events_url": "https://api.github.com/users/omar-araboghli/events{/privacy}",
"followers_url": "https://api.github.com/users/omar-araboghli/followers",
"following_url": "https://api.github.com/users/omar-araboghli/following{/other_user}",
"gists_url": "https://api.github.com/users/omar-araboghli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omar-araboghli",
"id": 29576434,
"login": "omar-araboghli",
"node_id": "MDQ6VXNlcjI5NTc2NDM0",
"organizations_url": "https://api.github.com/users/omar-araboghli/orgs",
"received_events_url": "https://api.github.com/users/omar-araboghli/received_events",
"repos_url": "https://api.github.com/users/omar-araboghli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omar-araboghli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omar-araboghli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omar-araboghli",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi @omar-araboghli, thanks for your proposal.\r\n\r\nHave you tried to use \"csv\" loader instead of \"text\"? That already has a `sep` argument.",
"Hi @albertvillanova, thanks for the quick response!\r\n\r\nIndeed, I have been trying to use `csv` instead of `text`. However I am still not able to define range of rows as one sequence, that is achievable with passing `sample_by='paragraph'` to the `TextConfig`\r\n\r\nFor instance, the below code\r\n\r\n```python\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\r\n path='csv',\r\n data_files={'train': TRAINING_SET_PATH},\r\n sep='\\t',\r\n header=None,\r\n column_names=['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']\r\n)\r\n```\r\n\r\nleads to \r\n\r\n```python\r\ndataset\r\n>>> DatasetDict({\r\n train: Dataset({\r\n features: ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 62543\r\n })\r\n})\r\n\r\ndataset['train'][0]\r\n>>> {'tokens': 'Distribution',\r\n 'pos_tags': 'NN',\r\n 'chunk_tags': 'O',\r\n 'ner_tags': 'O'\r\n}\r\n```\r\nIs there a way to deal with multiple csv rows as one dataset instance, where each column is a sequence of those rows ?"
] | 2023-01-31T10:39:53
| 2023-01-31T14:50:18
| null |
NONE
| null | null | null | null |
I have a local a `.txt` file that follows the `CONLL2003` format which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` to parse a paragraph into an array for each column ? If so, I am happy to contribute!
## Environment
* `python 3.8.10`
* `datasets 2.9.0`
## Snippet of `train.txt`
```txt
Distribution NN O O
and NN O O
dynamics NN O O
of NN O O
electron NN O B-RP
complexes NN O I-RP
in NN O O
cyanobacterial NN O B-R
membranes NN O I-R
The NN O O
occurrence NN O O
of NN O O
prostaglandin NN O B-R
F2α NN O I-R
in NN O O
Pharbitis NN O B-R
seedlings NN O I-R
grown NN O O
under NN O O
short NN O B-P
days NN O I-P
or NN O I-P
days NN O I-P
```
## Current Behaviour
```python
# defining 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'] here would fail with `ValueError: Length of names (4) does not match length of arrays (1)`
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='line')
dataset['train']['tokens'][0]
>>> 'Distribution\tNN\tO\tO'
```
## Expected Behaviour / Suggestion
```python
# suppose we defined 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='paragraph', sep='\t')
dataset['train']['tokens'][0]
>>> ['Distribution', 'and', 'dynamics', ... ]
dataset['train']['ner_tags'][0]
>>> ['O', 'O', 'O', ... ]
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5486/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5486/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5483
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5483/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5483/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5483/events
|
https://github.com/huggingface/datasets/issues/5483
| 1,560,894,690
|
I_kwDODunzps5dCVzi
| 5,483
|
Unable to upload dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Seems to work now, perhaps it was something internal with our university's network."
] | 2023-01-28T15:18:26
| 2023-01-29T08:09:49
| 2023-01-29T08:09:49
|
NONE
| null | null | null | null |
### Describe the bug
Uploading a simple dataset ends with an exception
### Steps to reproduce the bug
I created a new conda env with python 3.10, pip installed datasets and:
```python
>>> from datasets import load_dataset, load_from_disk, Dataset
>>> d = Dataset.from_dict({"text": ["hello"] * 2})
>>> d.push_to_hub("ttt111")
/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_hf_folder.py:92: UserWarning: A token has been found in `/a/home/cc/students/cs/kirstain/.huggingface/token`. This is the old path where tokens were stored. The new location is `/home/olab/kirstain/.cache/huggingface/token` which is configurable using `HF_HOME` environment variable. Your token has been copied to this new location. You can now safely delete the old token file manually or use `huggingface-cli logout`.
warnings.warn(
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 279.94ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:02<?, ?it/s]
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:04<?, ?it/s]
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status
response.raise_for_status()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 334, in _inner_upload_lfs_object
return _upload_lfs_object(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 391, in _upload_lfs_object
lfs_upload(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 273, in lfs_upload
_upload_single_part(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 305, in _upload_single_part
hf_raise_for_status(upload_res)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 318, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4909, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4804, in _push_parquet_shards_to_hub
_retry(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 281, in _retry
return func(*func_args, **func_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2537, in upload_file
commit_info = self.create_commit(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2346, in create_commit
upload_lfs_files(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 346, in upload_lfs_files
thread_map(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 338, in _inner_upload_lfs_object
raise RuntimeError(
RuntimeError: Error while uploading 'data/train-00000-of-00001-6df93048e66df326.parquet' to the Hub.
```
### Expected behavior
The dataset should be uploaded without any exceptions
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.27
- Python version: 3.10.9
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5483/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5483/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 16:51:23
|
https://api.github.com/repos/huggingface/datasets/issues/5482
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5482/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5482/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5482/events
|
https://github.com/huggingface/datasets/issues/5482
| 1,560,853,137
|
I_kwDODunzps5dCLqR
| 5,482
|
Reload features from Parquet metadata
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFreidank",
"id": 6368040,
"login": "MFreidank",
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFreidank",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFreidank",
"id": 6368040,
"login": "MFreidank",
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFreidank",
"user_view_type": "public"
}
] |
[
"I'd be happy to have a look, if nobody else has started working on this yet @lhoestq. \r\n\r\nIt seems to me that for the `arrow` format features are currently attached as metadata [in `datasets.arrow_writer`](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/arrow_writer.py#L412) and retrieved from the metadata at `load_dataset` time using [`datasets.features.features.from_arrow_schema`](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/features/features.py#L1602). \r\n\r\nThis will need to be replicated for `parquet` via calls to [this api](https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_metadata.html) from `io.parquet.ParquetWriter` and `io.parquet.ParquetReader` [respectively](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/io/parquet.py#L104).\r\n\r\nAny other important considerations?\r\n",
"Thanks @MFreidank ! That's correct :)\r\n\r\nReading the metadata to infer the features can be ideally done in the `parquet.py` file in `packaged_builder` when a parquet file is read. You can cast the arrow table to the schema you get from the features.arrow_schema",
"#self-assign"
] | 2023-01-28T13:12:31
| 2023-02-12T15:57:02
| 2023-02-12T15:57:02
|
MEMBER
| null | null | null | null |
The idea would be to allow this :
```python
ds.to_parquet("my_dataset/ds.parquet")
reloaded = load_dataset("my_dataset")
assert ds.features == reloaded.features
```
And it should also work with Image and Audio types (right now they're reloaded as a dict type)
This can be implemented by storing and reading the feature types in the parquet metadata, as we do for arrow files.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5482/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5482/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 15 days, 2:44:31
|
https://api.github.com/repos/huggingface/datasets/issues/5481
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5481/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5481/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5481/events
|
https://github.com/huggingface/datasets/issues/5481
| 1,560,468,195
|
I_kwDODunzps5dAtrj
| 5,481
|
Load a cached dataset as iterable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] |
open
| false
| null |
[] |
[
"Can I work on this issue? I am pretty new to this.",
"Hi ! Sure :) you can comment `#self-assign` to assign yourself to this issue.\r\n\r\nI can give you some pointers to get started:\r\n\r\n`load_dataset` works roughly this way:\r\n1. it instantiate a dataset builder using `load_dataset_builder()`\r\n2. the builder download and prepare the dataset as Arrow files in the cache using `download_and_prepare()`\r\n3. the builder returns a Dataset object with `as_dataset()`\r\n\r\nOne way to approach this would be to implement `as_iterable_dataset()` in `builder.py`.\r\n\r\nAnd similarly to `as_dataset()`, you can use the `ArrowReader`. It has a `get_file_instructions()` method that can be helpful. It gives you the files to read as list of dictionaries with those keys: `filename`, `skip` and `take`.\r\n\r\nThe `skip` and `take` arguments are used in case the user wants to load a subset of the dataset, e.g.\r\n```python\r\nload_dataset(..., split=\"train[:10]\")\r\n```\r\n\r\nLet me know if you have questions or if I can help :)",
"This use-case is a bit specific, and `load_dataset` already has enough parameters (plus, `streaming=True` also returns an iterable dataset, so we would have to explain the difference), so I think it would be better to add `IterableDataset.from_file` to the API (more flexible and aligned with the goal from https://github.com/huggingface/datasets/issues/3444) instead.",
"> This use-case is a bit specific\r\n\r\nThis allows to use `datasets` for large scale training where map-style datasets are too slow and use too much memory in PyTorch. So I would still consider adding it.\r\n\r\nAlternatively we could add this feature one level bellow:\r\n```python\r\nbuilder = load_dataset_builder(...)\r\nbuilder.download_and_prepare()\r\nids = builder.as_iterable_dataset()\r\n```",
"Yes, I see how this can be useful. Still, I think `Dataset.to_iterable` + `IterableDataset.from_file` would be much cleaner in terms of the API design (and more flexible since `load_dataset` can only access the \"initial\" (unprocessed) version of a dataset).\r\n\r\nAnd since it can be tricky to manually find the \"initial\" version of a dataset in the cache, maybe `load_dataset` could return an iterable dataset streamed from the cache if `streaming=True` and the cache is up-to-date. ",
"> This allows to use datasets for large scale training where map-style datasets are too slow and use too much memory in PyTorch.\r\n\r\nI second that. e.g. In my last experiment Oscar-en uses 16GB RSS RAM per process and when using multiple processes the host quickly runs out cpu memory. ",
">And since it can be tricky to manually find the \"initial\" version of a dataset in the cache, maybe load_dataset could return an iterable dataset streamed from the cache if streaming=True and the cache is up-to-date.\r\n\r\nThis is exactly the need on JeanZay (HPC) - I have the dataset cache ready, but the compute node is offline, so making streaming work off a local cache would address that need.\r\n\r\nIf you will have a working POC I can be the tester. ",
"> Yes, I see how this can be useful. Still, I think Dataset.to_iterable + IterableDataset.from_file would be much cleaner in terms of the API design (and more flexible since load_dataset can only access the \"initial\" (unprocessed) version of a dataset).\r\n\r\nI like `IterableDataset.from_file` as well. On the other hand `Dataset.to_iterable` first requires to load a Dataset object, which can take time depending on your hardware and your dataset size (sometimes 1h+).\r\n\r\n> And since it can be tricky to manually find the \"initial\" version of a dataset in the cache, maybe load_dataset could return an iterable dataset streamed from the cache if streaming=True and the cache is up-to-date.\r\n\r\nThat would definitely do the job. I was suggesting a different parameter just to make explicit the difference between\r\n- streaming from the raw data\r\n- streaming from the local cache\r\n\r\nBut I'd be fine with streaming from cache is the cache is up-to-date since it's always faster. We could log a message as usual to make it explicit that the cache is used",
"> I was suggesting a different parameter just to make explicit the difference between\r\n\r\nMosaicML's `streaming` library does the same (tries to stream from the local cache if possible), so logging a message should be explicit enough :).",
"Ok ! Sounds good then :)",
"Hi Both! It has been a while since my first issue so I am gonna go for this one ! #self-assign",
"#self-assign",
"I like idea of `IterableDataset.from_file`. ",
"https://github.com/huggingface/datasets/pull/5821 should be helpful to implement `IterableDataset.from_file`, since it defines a new ArrowExamplesIterable that takes an Arrow tables generator function (e.g. from a file) and can be used in an IterableDataset",
"@lhoestq I have just started working on this issue. ",
"@lhoestq Thank you for taking over.",
"So what's recommanded usage of `IterableDataset.from_file` and `load_dataset`? How about I have multiple arrow files and `load_dataset` is often convenient to handle that.",
"If you have multiple Arrow files you can load them using\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndata_files = {\"train\": [\"path/to/0.arrow\", \"path/to/1.arrow\", ..., \"path/to/n.arrow\"]}\r\n\r\nds = load_dataset(\"arrow\", data_files=data_files, streaming=True)\r\n```\r\n\r\nThis is equivalent to calling `IterableDataset.from_file` and `concatenate_datasets`.",
"Hi! 👋 I’d love to help with this feature and was wondering if @mariusz-jachimowicz-83 is still working on it. If not, I’d be happy to pick it up and continue. Let me know! 🙌",
"I don't think anyone is working on this at the moment, imo the simplest would be to do it one level below\n\n```python\nbuilder = load_dataset_builder(...)\nbuilder.download_and_prepare()\nids = builder.as_iterable_dataset()\n```",
"Thanks for the clarification @lhoestq 🙌 Will share a draft soon",
"Follow-up test added in #7629. for as_iterable_dataset() method.\nI’ve added the unit test in a separate PR to keep this one focused on the feature implementation, as the test is optional and can be reviewed independently."
] | 2023-01-27T21:43:51
| 2025-06-19T19:30:52
| null |
MEMBER
| null | null | null | null |
The idea would be to allow something like
```python
ds = load_dataset("c4", "en", as_iterable=True)
```
To be used to train models. It would load an IterableDataset from the cached Arrow files.
Cc @stas00
Edit : from the discussions we may load from cache when streaming=True
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5481/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5481/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5479
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5479/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5479/events
|
https://github.com/huggingface/datasets/issues/5479
| 1,560,357,590
|
I_kwDODunzps5dASrW
| 5,479
|
audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/joseph-y-cho/events{/privacy}",
"followers_url": "https://api.github.com/users/joseph-y-cho/followers",
"following_url": "https://api.github.com/users/joseph-y-cho/following{/other_user}",
"gists_url": "https://api.github.com/users/joseph-y-cho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joseph-y-cho",
"id": 107211437,
"login": "joseph-y-cho",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/joseph-y-cho/orgs",
"received_events_url": "https://api.github.com/users/joseph-y-cho/received_events",
"repos_url": "https://api.github.com/users/joseph-y-cho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joseph-y-cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joseph-y-cho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joseph-y-cho",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2023-01-27T20:01:22
| 2023-01-29T05:23:14
| 2023-01-29T05:23:14
|
NONE
| null | null | null | null |
### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libndfile installed on both computers, what could be missing/need to be updated in the one that doesn't work? On the remote env, libsndfile is 1.0.28 and ffmpeg is 4.2.1.
from datasets import load_dataset
ds = load_dataset("audiofolder", data_dir="...")
Here is the output (should be generating 400+ rows):
Downloading and preparing dataset audiofolder/default to ...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to ... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
train: Dataset({
features: ['audio', 'transcription'],
num_rows: 1
})
})
Here is my pip environment in the one that doesn't work (uses torch 1.11.a0 from shared env):
Package Version
------------------- -------------------
aiofiles 22.1.0
aiohttp 3.8.3
aiosignal 1.3.1
altair 4.2.1
anyio 3.6.2
appdirs 1.4.4
argcomplete 2.0.0
argon2-cffi 20.1.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 21.2.0
audioread 3.0.0
backcall 0.2.0
bleach 4.0.0
certifi 2021.10.8
cffi 1.14.6
charset-normalizer 2.0.12
click 8.1.3
contourpy 1.0.7
cycler 0.11.0
datasets 2.9.0
debugpy 1.4.1
decorator 5.0.9
defusedxml 0.7.1
dill 0.3.6
distlib 0.3.4
entrypoints 0.3
evaluate 0.4.0
expecttest 0.1.3
fastapi 0.89.1
ffmpy 0.3.0
filelock 3.6.0
fonttools 4.38.0
frozenlist 1.3.3
fsspec 2023.1.0
future 0.18.2
gradio 3.16.2
h11 0.14.0
httpcore 0.16.3
httpx 0.23.3
huggingface-hub 0.12.0
idna 3.3
ipykernel 6.2.0
ipython 7.26.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 3.0.1
jiwer 2.5.1
joblib 1.2.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.0
jupyter-core 4.7.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
kiwisolver 1.4.4
Levenshtein 0.20.2
librosa 0.9.2
linkify-it-py 1.0.3
llvmlite 0.39.1
markdown-it-py 2.1.0
MarkupSafe 2.0.1
matplotlib 3.6.3
matplotlib-inline 0.1.2
mdit-py-plugins 0.3.3
mdurl 0.1.2
mistune 0.8.4
multidict 6.0.4
multiprocess 0.70.14
nbclient 0.5.4
nbconvert 6.1.0
nbformat 5.1.3
nest-asyncio 1.5.1
notebook 6.4.3
numba 0.56.4
numpy 1.20.3
orjson 3.8.5
packaging 21.0
pandas 1.5.3
pandocfilters 1.4.3
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pipx 1.1.0
platformdirs 2.5.2
pooch 1.6.0
prometheus-client 0.11.0
prompt-toolkit 3.0.19
psutil 5.9.0
ptyprocess 0.7.0
pyarrow 10.0.1
pycparser 2.20
pycryptodome 3.16.0
pydantic 1.10.4
pydub 0.25.1
Pygments 2.10.0
pyparsing 2.4.7
pyrsistent 0.18.0
python-dateutil 2.8.2
python-multipart 0.0.5
pytz 2022.7.1
PyYAML 6.0
pyzmq 22.2.1
qtconsole 5.1.1
QtPy 1.10.0
rapidfuzz 2.13.7
regex 2022.10.31
requests 2.27.1
resampy 0.4.2
responses 0.18.0
rfc3986 1.5.0
scikit-learn 1.2.1
scipy 1.6.3
Send2Trash 1.8.0
setuptools 65.5.1
shiboken6 6.3.1
shiboken6-generator 6.3.1
six 1.16.0
sniffio 1.3.0
soundfile 0.11.0
starlette 0.22.0
terminado 0.11.0
testpath 0.5.0
threadpoolctl 3.1.0
tokenizers 0.13.2
toolz 0.12.0
torch 1.11.0a0+gitunknown
tornado 6.1
tqdm 4.64.1
traitlets 5.0.5
transformers 4.27.0.dev0
types-dataclasses 0.6.4
typing_extensions 4.1.1
uc-micro-py 1.0.1
urllib3 1.26.9
userpath 1.8.0
uvicorn 0.20.0
virtualenv 20.14.1
wcwidth 0.2.5
webencodings 0.5.1
websockets 10.4
wheel 0.37.1
widgetsnbextension 3.5.1
xxhash 3.2.0
yarl 1.8.2
### Steps to reproduce the bug
Create a pip environment with the packages listed above (make sure ffmpeg and libsndfile is installed with same versions listed above).
Create a custom audio dataset and load it in with load_dataset("audiofolder", ...)
### Expected behavior
load_dataset should create a dataset with 400+ rows.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.0
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/joseph-y-cho/events{/privacy}",
"followers_url": "https://api.github.com/users/joseph-y-cho/followers",
"following_url": "https://api.github.com/users/joseph-y-cho/following{/other_user}",
"gists_url": "https://api.github.com/users/joseph-y-cho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joseph-y-cho",
"id": 107211437,
"login": "joseph-y-cho",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/joseph-y-cho/orgs",
"received_events_url": "https://api.github.com/users/joseph-y-cho/received_events",
"repos_url": "https://api.github.com/users/joseph-y-cho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joseph-y-cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joseph-y-cho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joseph-y-cho",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5479/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 9:21:52
|
https://api.github.com/repos/huggingface/datasets/issues/5477
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5477/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5477/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5477/events
|
https://github.com/huggingface/datasets/issues/5477
| 1,559,909,892
|
I_kwDODunzps5c-lYE
| 5,477
|
Unpin sqlalchemy once issue is fixed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"@albertvillanova It looks like that issue has been fixed so I made a PR to unpin sqlalchemy! ",
"The source issue:\r\n- https://github.com/pandas-dev/pandas/issues/40686\r\n\r\nhas been fixed:\r\n- https://github.com/pandas-dev/pandas/pull/48576\r\n\r\nThe fix was released yesterday (2023-04-03) only in `pandas-2.0.0`:\r\n- https://github.com/pandas-dev/pandas/releases/tag/v2.0.0\r\n\r\nbut it will not be back-ported to `pandas-1`:\r\n- https://github.com/pandas-dev/pandas/pull/48576#issuecomment-1466467159\r\n\r\nAlso note that `pandas-2.0.0` dropped support for Python 3.7:\r\n- https://github.com/pandas-dev/pandas/issues/41678\r\n- https://github.com/pandas-dev/pandas/pull/41989\r\n\r\nTherefore, we cannot unpin `sqlalchemy` until we drop support for Python 3.7 (these Python users cannot use `pandas-2`)."
] | 2023-01-27T15:01:55
| 2024-01-26T14:50:45
| 2024-01-26T14:50:45
|
MEMBER
| null | null | null | null |
Once the source issue is fixed:
- pandas-dev/pandas#51015
we should revert the pin introduced in:
- #5476
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5477/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5477/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 363 days, 23:48:50
|
https://api.github.com/repos/huggingface/datasets/issues/5475
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5475/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5475/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5475/events
|
https://github.com/huggingface/datasets/issues/5475
| 1,559,030,149
|
I_kwDODunzps5c7OmF
| 5,475
|
Dataset scan time is much slower than using native arrow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}",
"gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonny-cyberhaven",
"id": 121845112,
"login": "jonny-cyberhaven",
"node_id": "U_kgDOB0M1eA",
"organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs",
"received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events",
"repos_url": "https://api.github.com/users/jonny-cyberhaven/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonny-cyberhaven",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! In your code you only iterate on the Arrow buffers - you don't actually load the data as python objects. For a fair comparison, you can modify your code using:\r\n```diff\r\n- for _ in range(0, len(table), bsz):\r\n- _ = {k:table[k][_ : _ + bsz] for k in cols}\r\n+ for _ in range(0, len(table), bsz):\r\n+ _ = {k:table[k][_ : _ + bsz].to_pylist() for k in cols}\r\n```\r\n\r\nI re-ran your code and got a speed ratio of 1.00x and 1.02x",
"Ah I see, datasets is implicitly making this conversion. Thanks for pointing that out!\r\n\r\nIf it's not too much, I would also suggest updating some of your docs with the same `.to_pylist()` conversion in the code snippet that follows [here](https://huggingface.co/course/chapter5/4?fw=pt#:~:text=let%E2%80%99s%20run%20a%20little%20speed%20test%20by%20iterating%20over%20all%20the%20elements%20in%20the%20PubMed%20Abstracts%20dataset%3A).",
"This code snippet shows `datasets` code that reads the Arrow data as python objects already, there is no need to add to_pylist. Or were you thinking about something else ?"
] | 2023-01-27T01:32:25
| 2023-01-30T16:17:11
| 2023-01-30T16:17:11
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version.
I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that explains this phenomenon?
### Steps to reproduce the bug
https://colab.research.google.com/drive/11EtHDaGAf1DKCpvYnAPJUW-LFfAcDzHY?usp=sharing
### Expected behavior
I expect scan times to be on par with using pyarrow directly.
### Environment info
standard colab environment
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}",
"gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonny-cyberhaven",
"id": 121845112,
"login": "jonny-cyberhaven",
"node_id": "U_kgDOB0M1eA",
"organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs",
"received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events",
"repos_url": "https://api.github.com/users/jonny-cyberhaven/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonny-cyberhaven",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5475/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5475/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 14:44:46
|
https://api.github.com/repos/huggingface/datasets/issues/5474
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5474/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5474/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5474/events
|
https://github.com/huggingface/datasets/issues/5474
| 1,558,827,155
|
I_kwDODunzps5c6dCT
| 5,474
|
Column project operation on `datasets.Dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/daskol",
"id": 9336514,
"login": "daskol",
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"repos_url": "https://api.github.com/users/daskol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/daskol",
"user_view_type": "public"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi ! This would be a nice addition indeed :) This sounds like a duplicate of https://github.com/huggingface/datasets/issues/5468\r\n\r\n> Not sure. Some of my PRs are still open and some do not have any discussions.\r\n\r\nSorry to hear that, feel free to ping me on those PRs"
] | 2023-01-26T21:47:53
| 2023-02-13T09:59:37
| 2023-02-13T09:59:37
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
There is no operation to select a subset of columns of original dataset. Expected API follows.
```python
a = Dataset.from_dict({
'int': [0, 1, 2]
'char': ['a', 'b', 'c'],
'none': [None] * 3,
})
b = a.project('int', 'char') # usually, .select()
print(a.column_names) # stdout: ['int', 'char', 'none']
print(b.column_names) # stdout: ['int', 'char']
```
Method project can easily accept not only column names (as a `str)` but univariant function applied to corresponding column as an example. Or keyword arguments can be used in order to rename columns in advance (see `pandas`, `pyspark`, `pyarrow`, and SQL)..
### Motivation
Projection is a typical operation in every data processing library. And it is a basic block of a well-known data manipulation language like SQL. Without this operation `datasets.Dataset` interface is not complete.
### Your contribution
Not sure. Some of my PRs are still open and some do not have any discussions.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5474/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5474/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17 days, 12:11:44
|
https://api.github.com/repos/huggingface/datasets/issues/5468
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5468/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5468/events
|
https://github.com/huggingface/datasets/issues/5468
| 1,558,066,625
|
I_kwDODunzps5c3jXB
| 5,468
|
Allow opposite of remove_columns on Dataset and DatasetDict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hollance",
"id": 346853,
"login": "hollance",
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"repos_url": "https://api.github.com/users/hollance/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hollance",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"Hi! I agree it would be nice to have a method like that. Instead of `keep_columns`, we can name it `select_columns` to be more aligned with PyArrow's naming convention (`pa.Table.select`).",
"Hi, I am a newbie to open source and would like to contribute. @mariosasko can I take up this issue ?",
"Hey, I also want to work on this issue I am a newbie to open source. ",
"This sounds related to https://github.com/huggingface/datasets/issues/5474\r\n\r\nI'm fine with `select_columns`, or we could also override `select` to also accept a list of columns maybe ?",
"@lhoestq, I am planning to add a member function to the dataset class to perform the selection operation. Do you think its the right way to proceed? or there is a better option ?",
"Unless @mariosasko thinks otherwise, I think it can go in `Dataset.select()` :)\r\nThough some parameters like keep_in_memory, indices_cache_file_name or writer_batch_size wouldn't when selecting columns, so we would need to update the docstring as well",
"If someone wants to give it a shot, feel free to comment `#self-assign` and it will assign the issue to you.\r\n\r\nFeel free to ping us here if you have questions or if we can help :)",
"I would rather have this functionality as a separate method. IMO it's always better to be explicit than to have an API where a single method can do different/uncorrelated things (somewhat reminds me of Pandas, and there is probably a good reason why PyArrow is more rigid in this aspect).",
"In the end I also think it would be nice to have it as a separate method, this way we can also have it for `IterableDataset` (which can't have `select` for indices)"
] | 2023-01-26T12:28:09
| 2023-02-13T09:59:38
| 2023-02-13T09:59:38
|
NONE
| null | null | null | null |
### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(columns_to_remove)
```
This kind of thing happens a lot when you don't need to keep all columns from the dataset. It would be more convenient (and less error prone) if you could just write:
```python
gigaspeech = gigaspeech.keep_columns(["text", "audio"])
```
Internally, `keep_columns` could still call `remove_columns`, but it expresses more clearly what the user's intent is.
### Motivation
Less code to write for the user of the dataset.
### Your contribution
-
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5468/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17 days, 21:31:29
|
https://api.github.com/repos/huggingface/datasets/issues/5465
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5465/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5465/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5465/events
|
https://github.com/huggingface/datasets/issues/5465
| 1,557,510,618
|
I_kwDODunzps5c1bna
| 5,465
|
audiofolder creates empty dataset even though the dataset passed in follows the correct structure
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/joseph-y-cho/events{/privacy}",
"followers_url": "https://api.github.com/users/joseph-y-cho/followers",
"following_url": "https://api.github.com/users/joseph-y-cho/following{/other_user}",
"gists_url": "https://api.github.com/users/joseph-y-cho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joseph-y-cho",
"id": 107211437,
"login": "joseph-y-cho",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/joseph-y-cho/orgs",
"received_events_url": "https://api.github.com/users/joseph-y-cho/received_events",
"repos_url": "https://api.github.com/users/joseph-y-cho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joseph-y-cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joseph-y-cho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joseph-y-cho",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2023-01-26T01:45:45
| 2023-01-26T08:48:45
| 2023-01-26T08:48:45
|
NONE
| null | null | null | null |
### Describe the bug
The structure of my dataset folder called "my_dataset" is : data metadata.csv
The data folder consists of all mp3 files and metadata.csv consist of file locations like 'data/...mp3 and transcriptions. There's 400+ mp3 files and corresponding transcriptions for my dataset.
When I run the following:
ds = load_dataset("audiofolder", data_dir="my_dataset")
I get:
Using custom data configuration default-...
Downloading and preparing dataset audiofolder/default to /...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to /.... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
train: Dataset({
features: ['audio', 'transcription'],
num_rows: 1
})
})
### Steps to reproduce the bug
Create a dataset folder called 'my_dataset' with a subfolder called 'data' that has mp3 files. Also, create metadata.csv that has file locations like 'data/...mp3' and their corresponding transcription.
Run:
ds = load_dataset("audiofolder", data_dir="my_dataset")
### Expected behavior
It should generate a dataset with numerous rows.
### Environment info
Run on Jupyter notebook
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/joseph-y-cho/events{/privacy}",
"followers_url": "https://api.github.com/users/joseph-y-cho/followers",
"following_url": "https://api.github.com/users/joseph-y-cho/following{/other_user}",
"gists_url": "https://api.github.com/users/joseph-y-cho/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joseph-y-cho",
"id": 107211437,
"login": "joseph-y-cho",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/joseph-y-cho/orgs",
"received_events_url": "https://api.github.com/users/joseph-y-cho/received_events",
"repos_url": "https://api.github.com/users/joseph-y-cho/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joseph-y-cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joseph-y-cho/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joseph-y-cho",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5465/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5465/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7:03:00
|
https://api.github.com/repos/huggingface/datasets/issues/5464
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5464/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5464/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5464/events
|
https://github.com/huggingface/datasets/issues/5464
| 1,557,462,104
|
I_kwDODunzps5c1PxY
| 5,464
|
NonMatchingChecksumError for hendrycks_test
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting, @sarahwie.\r\n\r\nPlease note this issue was already fixed in `datasets` 2.6.0 version:\r\n- #5040\r\n\r\nIf you update your `datasets` version, you will be able to load the dataset:\r\n```\r\npip install -U datasets\r\n```",
"Oops, missed that I needed to upgrade. Thanks!"
] | 2023-01-26T00:43:23
| 2023-01-27T05:44:31
| 2023-01-26T07:41:58
|
NONE
| null | null | null | null |
### Describe the bug
The checksum of the file has likely changed on the remote host.
### Steps to reproduce the bug
`dataset = nlp.load_dataset("hendrycks_test", "anatomy")`
### Expected behavior
no error thrown
### Environment info
- `datasets` version: 2.2.1
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5464/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5464/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6:58:35
|
https://api.github.com/repos/huggingface/datasets/issues/5461
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5461/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5461/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5461/events
|
https://github.com/huggingface/datasets/issues/5461
| 1,555,532,719
|
I_kwDODunzps5ct4uv
| 5,461
|
Discrepancy in `nyu_depth_v2` dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4",
"events_url": "https://api.github.com/users/awsaf49/events{/privacy}",
"followers_url": "https://api.github.com/users/awsaf49/followers",
"following_url": "https://api.github.com/users/awsaf49/following{/other_user}",
"gists_url": "https://api.github.com/users/awsaf49/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/awsaf49",
"id": 36858976,
"login": "awsaf49",
"node_id": "MDQ6VXNlcjM2ODU4OTc2",
"organizations_url": "https://api.github.com/users/awsaf49/orgs",
"received_events_url": "https://api.github.com/users/awsaf49/received_events",
"repos_url": "https://api.github.com/users/awsaf49/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/awsaf49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awsaf49/subscriptions",
"type": "User",
"url": "https://api.github.com/users/awsaf49",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Ccing @dwofk (the author of `fast-depth`). \r\n\r\nThanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed. \r\n\r\nIf you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2) feel free to open a PR, I am happy to provide guidance :) ",
"Good catch ! Ideally it would be nice to have the datasets in the raw form, this way users can choose whatever processing they want to apply",
"> Ccing @dwofk (the author of `fast-depth`).\r\n> \r\n> Thanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed.\r\n> \r\n> If you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2) feel free to open a PR, I am happy to provide guidance :)\r\n\r\n@sayakpaul I would love to create a PR on this. As this will be my first PR here, some guidance would be helpful.\r\n\r\nNeed a bit of advice on the dataset, there are three publicly available datasets. Which one should I consider for PR?\r\n1. [BTS](https://github.com/cleinc/bts): Containst train/test: 36K/654 data, dtype = `uint16` hence more precise\r\n2. [DenseDepth](https://github.com/ialhashim/DenseDepth) It contains train/test: 50K/654 data, dtype = `uint8` hence less precise\r\n3. [Official](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html#raw_parts): Size is big 400GB+, requires **MatLab** code for fixing **projection** and **sync**, DataType: `pgm` and `dump` hence can't be used directly.\r\n\r\ncc: @lhoestq\r\n\r\n",
"I think BTS. Repositories like https://github.com/vinvino02/GLPDepth usually use BTS. Also, just for clarity, the PR will be to https://huggingface.co/datasets/sayakpaul/nyu_depth_v2. Once we have worked it out, we can update the following things:\r\n\r\n* https://github.com/huggingface/blog/pull/718\r\n* https://huggingface.co/docs/datasets/main/en/depth_estimation\r\n\r\nDon't worry about it if it seems overwhelming. We will work it out together :) \r\n\r\n@lhoestq what do you think? ",
"@sayakpaul If I get this right I have to,\r\n1. Create a PR on https://huggingface.co/datasets/sayakpaul/nyu_depth_v2\r\n2. Create a PR on https://github.com/huggingface/blog\r\n3. Create a PR on https://github.com/huggingface/datasets to update https://github.com/huggingface/datasets/blob/main/docs/source/depth_estimation.mdx",
"The last two are low-hanging fruits. Don't worry about them. ",
"Yup opening a PR to use BTS on https://huggingface.co/datasets/sayakpaul/nyu_depth_v2 sounds good :) Thanks for the help !",
"Finally, I have found the origin of the **discretized depth map**. When I first loaded the datasets from HF I noticed it was 30GB but in DenseDepth data is only 4GB with dtype=uint8. This means data from fast-depth (before loading to HF) must have high precision. So when I tried to dig deeper by directly loading depth_map from `h5py`, I found depth_map from `h5py` came with `float32`. But when the data is processed in HF with `datasets.Image()` it was directly converted to `uint8` from `float32` hence the **discretized** depth map.\r\nhttps://github.com/huggingface/datasets/blob/c78559cacbb0ca6e0bc8bfc313cc0359f8c23ead/src/datasets/features/image.py#L91-L93\r\n\r\n## Solutions:\r\n\r\n#### 1. Array2D\r\nUse `Array2D` feature with `float32` for depth_map \r\n\r\n* Code:\r\n```py\r\nFeatures({'depth_map': Array2D(shape=(480, 640), dtype='float32')})\r\n```\r\n* Pros:\r\nNo precision loss.\r\n\r\n* Cons:\r\nAs depth_map is saved as Array I think it can't be visuzlied in [hf.co/dataset](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2) page like segmentation mask.\r\n\r\n#### 2. Uint16\r\nUse `uint16` as dtype for Image in `_h5_loader` for saving depth maps and accept `uint16` dtype in `datasets.Image()` feature.\r\n\r\n* Code\r\n```py\r\ndepth = np.array(h5f[\"depth\"])\r\ndepth /= 10.0 # [0, max_depth] -> [0, 1]\r\ndepth *= (2**16 -1) # transform from [0, 1] -> [0, 2^16 - 1]\r\ndepth = depth.astype('uint16')\r\n```\r\n* Pros:\r\n * We can visualize depth map in hf.co/datasets page like segmentation mask.\r\n * No need for post-processing.\r\n\r\n* Cons:\r\n * We need to make two change\r\n * Modify `_h5_loader` in https://huggingface.co/datasets/sayakpaul/nyu_depth_v2 to convert depth_map from `float32` to `uint16`.\r\n * Make sure `datasets.Image()` converts `np.ndarray` to `uint16` checking max value\r\n * Precision loss due to `float32` to `uint16`\r\n * Post-processing required for depth_map to transform from `[0, 2^16 - 1]` to `[0, max_depth]` before feeding them to model.",
"Thanks so much for digging into this. \r\n\r\nSince the second solution entails changes to core datatypes in `datasets`, I think it's better to go with the first solution. \r\n\r\n@lhoestq WDYT?",
"@sayakpaul Yes, Solution 1 requires minimal change and provides no precision loss. But I think support for `uint16` image would be a great addition as many datasets come with `uint16` image. For example [UW-Madison GI Tract Image Segmentation](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation) dataset, here the image itself comes with `uint16` dtype rather than mask. So, saving `uint16` image with `uint8` will result in precision loss.\r\n\r\nPerhaps we can adapt solution 1 for this issue and Add support for `uint16` image separately?",
"Using Array2D makes it not practical to use to train a model - in `transformers` we expect an image type.\r\n\r\nThere is a pull request to support more precision than uint8 in Image() here: https://github.com/huggingface/datasets/pull/5365/files\r\n\r\nwe can probably merge it today and do a release right away",
"Fantastic, @lhoestq! \r\n\r\n@awsaf49 then let's wait for the PR to get merged and then take the next steps? ",
"Sure",
"The PR adds support for uint16 which is ok for BTS if I understand correctly, would it be ok for you ?",
"If the main issue with the current version of NYU we have on the Hub is related to the precision loss stemming from `Image()`, I'd prefer if `Image()` supported float32 as well. ",
"I also prefer `float32` as it offers more precision. But I'm not sure if we'll be able to visualize image with `float32` precision.",
"We could have a separate loading for the float32 one using Array2D, but I feel like it's less convenient to use due to the amount of disk space and because it's not an Image() type. That's why I think uint16 is a better solution for users",
"A bit confused here, If https://github.com/huggingface/datasets/pull/5365 gets merged won't this issue will be resolved automatically?",
"Yes in theory :)",
"actually float32 also seems to work in this PR (it just doesn't work for multi-channel)",
"In that case, a new PR isn't necessary, right?",
"Yep. I just tested from the PR and it works:\r\n```python\r\n>>> train_dataset = load_dataset(\"sayakpaul/nyu_depth_v2\", split=\"train\", streaming=True) \r\nDownloading readme: 100%|██████████████████| 8.71k/8.71k [00:00<00:00, 3.60MB/s]\r\n>>> next(iter(train_dataset))\r\n{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=640x480 at 0x1382ED7F0>,\r\n 'depth_map': <PIL.TiffImagePlugin.TiffImageFile image mode=F size=640x480 at 0x1382EDF28>}\r\n>>> x = next(iter(train_dataset))\r\n>>> np.asarray(x[\"depth_map\"]) \r\narray([[0. , 0. , 0. , ..., 0. , 0. ,\r\n 0. ],\r\n [0. , 0. , 0. , ..., 0. , 0. ,\r\n 0. ],\r\n [0. , 0. , 0. , ..., 0. , 0. ,\r\n 0. ],\r\n ...,\r\n [0. , 2.2861192, 2.2861192, ..., 2.234162 , 2.234162 ,\r\n 0. ],\r\n [0. , 2.2861192, 2.2861192, ..., 2.234162 , 2.234162 ,\r\n 0. ],\r\n [0. , 2.2861192, 2.2861192, ..., 2.234162 , 2.234162 ,\r\n 0. ]], dtype=float32)\r\n```",
"Great! the case is closed! This issue has been solved and I have to say, it was quite the thrill ride. I felt like Sherlock Holmes, solving a mystery and finding the bug🕵️♂️. But in all seriousness, it was a pleasure working on this issue and I'm glad we could get to the bottom of it.\r\n\r\nOn another note, should I consider closing the issue? I think we still need to make updates on https://github.com/huggingface/blog and https://github.com/huggingface/datasets/blob/main/docs/source/depth_estimation.mdx",
"Haha thanks Mr Holmes :p\r\n\r\nmaybe let's close this issue when we're done updating the blog post and the documentation",
"@awsaf49 thank you for your hard work! \r\n\r\nI am a little unsure why the other links need to be updated, though. They all rely on datasets internally. ",
"I think depth_map still shows discretized version. It would be nice to have corrected one.\r\n<img src=\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_target_viz.png\" width = 300>",
"Also, I think we need to make some changes in the code to visualize depth_map as it is `float32` . `plot.imshow()` supports either [0, 1] + float32 or [0. 255] + uint8",
"Oh yes! Do you want to start with the fixes? Please feel free to say no but I wanted to make sure your contributions are reflected properly in our doc and the blog :)",
"Yes I think that would be nice :)",
"I'll make the changes tomorrow. I hope it's okay..."
] | 2023-01-24T19:15:46
| 2023-02-06T20:52:00
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
I think there is a discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are different from actual ones. Here is a side-by-side comparison,

I tried to find the origin of this issue but sadly as I mentioned in tensorflow/datasets/issues/4674, the download link from `fast-depth` doesn't work anymore hence couldn't verify if the error originated there or during porting data from there to HF.
Hi @sayakpaul, as you worked on huggingface/datasets/issues/5255, if you still have access to that data could you please share the data or perhaps checkout this issue?
### Steps to reproduce the bug
This [notebook](https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing#scrollTo=UEW7QSh0jf0i) from @sayakpaul could be used to generate depth maps and actual ground truths could be checked from this [dataset](https://www.kaggle.com/datasets/awsaf49/nyuv2-bts-dataset) from BTS repo.
> Note: BTS dataset has only 36K data compared to the train-test 50K. They sampled the data as adjacent frames look quite the same
### Expected behavior
Expected depth maps should be smooth rather than discrete/clipped.
### Environment info
- `datasets` version: 2.8.1.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5461/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5461/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5458
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5458/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5458/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5458/events
|
https://github.com/huggingface/datasets/issues/5458
| 1,555,054,737
|
I_kwDODunzps5csECR
| 5,458
|
slice split while streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122370631?v=4",
"events_url": "https://api.github.com/users/SvenDS9/events{/privacy}",
"followers_url": "https://api.github.com/users/SvenDS9/followers",
"following_url": "https://api.github.com/users/SvenDS9/following{/other_user}",
"gists_url": "https://api.github.com/users/SvenDS9/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SvenDS9",
"id": 122370631,
"login": "SvenDS9",
"node_id": "U_kgDOB0s6Rw",
"organizations_url": "https://api.github.com/users/SvenDS9/orgs",
"received_events_url": "https://api.github.com/users/SvenDS9/received_events",
"repos_url": "https://api.github.com/users/SvenDS9/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SvenDS9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SvenDS9/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SvenDS9",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! Yes, that's correct. When `streaming` is `True`, only split names can be specified as `split`, and for slicing, you have to use `.skip`/`.take` instead.\r\n\r\nE.g. \r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train[:3]\")`\r\n\r\nrewritten with `.skip`/`.take`:\r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train\").take(3)`\r\n\r\n\r\n",
"Thank you for your quick response!"
] | 2023-01-24T14:08:17
| 2023-01-24T15:11:47
| 2023-01-24T15:11:47
|
NONE
| null | null | null | null |
### Describe the bug
When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported.
Did I miss this in the documentation?
### Steps to reproduce the bug
`load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")`
causes ValueError: Bad split: train[:3]. Available splits: ['train', 'test'] in builder.py, line 1213, in as_streaming_dataset
### Expected behavior
The first 3 entries of the dataset as a stream
### Environment info
- `datasets` version: 2.8.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.9
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122370631?v=4",
"events_url": "https://api.github.com/users/SvenDS9/events{/privacy}",
"followers_url": "https://api.github.com/users/SvenDS9/followers",
"following_url": "https://api.github.com/users/SvenDS9/following{/other_user}",
"gists_url": "https://api.github.com/users/SvenDS9/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SvenDS9",
"id": 122370631,
"login": "SvenDS9",
"node_id": "U_kgDOB0s6Rw",
"organizations_url": "https://api.github.com/users/SvenDS9/orgs",
"received_events_url": "https://api.github.com/users/SvenDS9/received_events",
"repos_url": "https://api.github.com/users/SvenDS9/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SvenDS9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SvenDS9/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SvenDS9",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5458/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5458/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1:03:30
|
https://api.github.com/repos/huggingface/datasets/issues/5457
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5457/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5457/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5457/events
|
https://github.com/huggingface/datasets/issues/5457
| 1,554,171,264
|
I_kwDODunzps5cosWA
| 5,457
|
prebuilt dataset relies on `downloads/extracted`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi! \r\n\r\nThis issue is due to our audio/image datasets not being self-contained. This allows us to save disk space (files are written only once) but also leads to the issues like this one. We plan to make all our datasets self-contained in Datasets 3.0.\r\n\r\nIn the meantime, you can run the following map to ensure your dataset is self-contained:\r\n```python\r\nfrom datasets.table import embed_table_storage\r\n# load_dataset ...\r\ndset = dset.with_format(\"arrow\")\r\ndset.map(embed_table_storage, batched=True)\r\ndset = dset.with_format(\"python\")\r\n```\r\n",
"Understood. Thank you, Mario.\r\n\r\nPerhaps the solution could be very simple - move the extracted files into the directory of the cached dataset? Which would make it self-contained already and won't require waiting for a new major release. Unless I'm missing some back-compat nuance.\r\n\r\nBut regardless if X relies on Y - it could check if Y is still there when loading X. so not checking full consistency but just the top-level directory it relies on.",
"Hello, \r\n\r\nI also face some problem with prebuilt dataset that relies on the same directory on \r\n\r\n`.cache\\\\huggingface\\\\datasets\\\\downloads\\\\extracted\\\\b557ce52f22c65030869d849d199d7b3fd5af18b335143729c717d29f6221baa\\\\ADEChallengeData2016\\\\annotations\\\\training\\\\ADE_train_00000023.png'`\r\n\r\nThe images exist but the training function somehow cannot reached it. Is this also related to the same problem?\r\n\r\nCurrently the directory map looked like this:\r\n```\r\n\r\n> (hf-pretrain38) C:\\Users\\Len\\.cache\\huggingface>tree\r\n> Folder PATH listing\r\n> C:.\r\n> ├───datasets\r\n> │ ├───downloads\r\n> │ │ └───extracted\r\n> │ │ ├───64c6a0967481dbc192dceabeac06c02b47b992a106357d49e1916dfcdc23a2ea\r\n> │ │ │ └───release_test\r\n> │ │ │ └───testing\r\n> │ │ └───b557ce52f22c65030869d849d199d7b3fd5af18b335143729c717d29f6221baa\r\n> │ │ └───ADEChallengeData2016\r\n> │ │ ├───annotations\r\n> │ │ │ ├───training\r\n> │ │ │ └───validation\r\n> │ │ └───images\r\n> │ │ ├───training\r\n> │ │ └───validation\r\n> │ ├───parquet\r\n> │ │ └───yelp_review_full-66f1f8c8d1a2da02\r\n> │ │ └───0.0.0\r\n> │ │ └───14a00e99c0d15a23649d0db8944380ac81082d4b021f398733dd84f3a6c569a7\r\n> │ └───scene_parse_150\r\n> │ └───scene_parsing\r\n> │ └───1.0.0\r\n> │ └───d998c54e1b5c5bad12b4d2ec7e1a5f74eee4c153bc1b089a0001677ae9b3fd75\r\n> ├───evaluate\r\n> │ └───downloads\r\n> ├───hub\r\n> │ ├───.locks\r\n> │ │ ├───datasets--scene_parse_150\r\n> │ │ ├───models--facebook--mask2former-swin-large-cityscapes-instance\r\n> │ │ ├───models--facebook--mask2former-swin-large-cityscapes-panoptic\r\n> │ │ ├───models--nvidia--mit-b0\r\n> │ │ └───models--nvidia--segformer-b1-finetuned-cityscapes-1024-1024\r\n> │ ├───datasets--huggingface--label-files\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ └───snapshots\r\n> │ │ └───9462154cba99c3c7f569d3b4f1ba26614afd558c\r\n> │ ├───datasets--scene_parse_150\r\n> │ │ ├───.no_exist\r\n> │ │ │ └───ac1c0c0e23875e74cd77aca0fd725fd6a35c3667\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ └───snapshots\r\n> │ │ └───ac1c0c0e23875e74cd77aca0fd725fd6a35c3667\r\n> │ ├───models--bert-base-cased\r\n> │ │ ├───.no_exist\r\n> │ │ │ └───cd5ef92a9fb2f889e972770a36d4ed042daf221e\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ └───snapshots\r\n> │ │ └───cd5ef92a9fb2f889e972770a36d4ed042daf221e\r\n> │ ├───models--bert-case-cased\r\n> │ ├───models--facebook--detr-resnet-50-panoptic\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ └───snapshots\r\n> │ │ └───d53b52a799403a8867920f82c869e40732b47037\r\n> │ ├───models--facebook--mask2former-swin-base-coco-panoptic\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ └───snapshots\r\n> │ │ └───8351ef9576a965d65196da91a5015dcaf6c6b5d2\r\n> │ ├───models--facebook--mask2former-swin-large-cityscapes-instance\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ └───snapshots\r\n> │ │ └───70fed72d02a138560da931a1c6a2dcfbb56cd2ff\r\n> │ ├───models--facebook--mask2former-swin-large-cityscapes-panoptic\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ └───snapshots\r\n> │ │ └───544d76fe93971ee046dacae19b6d4f6ecb5d9088\r\n> │ ├───models--google_bert--bert-base-cased\r\n> │ ├───models--nvidia--mit-b0\r\n> │ │ ├───.no_exist\r\n> │ │ │ └───80983a413c30d36a39c20203974ae7807835e2b4\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ │ └───refs\r\n> │ │ │ └───pr\r\n> │ │ └───snapshots\r\n> │ │ ├───25ce79d97e6d9d509ed12e17cb2eb89b0a83a2dc\r\n> │ │ └───80983a413c30d36a39c20203974ae7807835e2b4\r\n> │ ├───models--nvidia--segformer-b0-finetuned-cityscapes-768-768\r\n> │ │ ├───blobs\r\n> │ │ ├───refs\r\n> │ │ └───snapshots\r\n> │ │ └───d3b7801ed329668d5bff04cd33365fa37f538c3b\r\n> │ └───models--nvidia--segformer-b1-finetuned-cityscapes-1024-1024\r\n> │ ├───.no_exist\r\n> │ │ └───ec86afeba68e656629ccf47e0c8d2902f964917b\r\n> │ ├───blobs\r\n> │ ├───refs\r\n> │ │ └───refs\r\n> │ │ └───pr\r\n> │ └───snapshots\r\n> │ ├───ad2bb0101129289844ea62577e6a22adc2752004\r\n> │ └───ec86afeba68e656629ccf47e0c8d2902f964917b\r\n> ├───metrics\r\n> │ └───mean_io_u\r\n> │ └───default\r\n> └───modules\r\n> ├───datasets_modules\r\n> │ ├───datasets\r\n> │ │ ├───scene_parse_150\r\n> │ │ │ ├───d998c54e1b5c5bad12b4d2ec7e1a5f74eee4c153bc1b089a0001677ae9b3fd75\r\n> │ │ │ │ └───__pycache__\r\n> │ │ │ └───__pycache__\r\n> │ │ └───__pycache__\r\n> │ └───__pycache__\r\n> └───evaluate_modules\r\n> ├───metrics\r\n> │ ├───evaluate-metric--mean_iou\r\n> │ │ ├───9e450724f21f05592bfb0255fe2fa576df8171fa060d11121d8aecfff0db80d0\r\n> │ │ │ └───__pycache__\r\n> │ │ └───__pycache__\r\n> │ └───__pycache__\r\n> └───__pycache__\r\n```\r\n\r\nWill appreciate for some help and will help in completing further details, thanks in advance"
] | 2023-01-24T02:09:32
| 2024-11-18T07:43:51
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
I pre-built the dataset:
```
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
```
and it can be used just fine.
now I wipe out `downloads/extracted` and it no longer works.
```
rm -r ~/.cache/huggingface/datasets/downloads
```
That is I can still load it:
```
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
No config specified, defaulting to: general-pmd-synthetic-testing/100.unique
Found cached dataset general-pmd-synthetic-testing (/home/stas/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing/100.unique/1.1.1/86bc445e3e48cb5ef79de109eb4e54ff85b318cd55c3835c4ee8f86eae33d9d2)
```
but if I try to use it:
```
E stderr: Traceback (most recent call last):
E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/main.py", line 116, in <module>
E stderr: train_loader, val_loader = get_dataloaders(
E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 170, in get_dataloaders
E stderr: train_loader = get_dataloader_from_config(
E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 443, in get_dataloader_from_config
E stderr: dataloader = get_dataloader(
E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 264, in get_dataloader
E stderr: is_pmd = "meta" in hf_dataset[0] and "source" in hf_dataset[0]
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2601, in __getitem__
E stderr: return self._getitem(
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2586, in _getitem
E stderr: formatted_output = format_table(
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 634, in format_table
E stderr: return formatter(pa_table, query_type=query_type)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 406, in __call__
E stderr: return self.format_row(pa_table)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 442, in format_row
E stderr: row = self.python_features_decoder.decode_row(row)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 225, in decode_row
E stderr: return self.features.decode_example(row) if self.features else row
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1846, in decode_example
E stderr: return {
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1847, in <dictcomp>
E stderr: column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1304, in decode_nested_example
E stderr: return decode_nested_example([schema.feature], obj)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1296, in decode_nested_example
E stderr: if decode_nested_example(sub_schema, first_elmt) != first_elmt:
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1309, in decode_nested_example
E stderr: return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/image.py", line 144, in decode_example
E stderr: image = PIL.Image.open(path)
E stderr: File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/PIL/Image.py", line 3092, in open
E stderr: fp = builtins.open(filename, "rb")
E stderr: FileNotFoundError: [Errno 2] No such file or directory: '/mnt/nvme0/code/data/cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data/101/images_01.jpg'
```
Only if I wipe out the cached dir and rebuild then it starts working as `download/extracted` is back again with extracted files.
```
rm -r ~/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
```
I think there are 2 issues here:
1. why does it still rely on extracted files after `arrow` files were printed - did I do something incorrectly when creating this dataset?
2. why doesn't the dataset know that it has been gutted and loads just fine? If it has a dependency on `download/extracted` then `load_dataset` should check if it's there and fail or force rebuilding. I am sure this could be a very expensive operation, so probably really solving #1 will not require this check. and this second item is probably an overkill. Other than perhaps if it had an optional `check_consistency` flag to do that.
### Environment info
datasets@main
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5457/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5457/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5454
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5454/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5454/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5454/events
|
https://github.com/huggingface/datasets/issues/5454
| 1,552,890,419
|
I_kwDODunzps5cjzoz
| 5,454
|
Save and resume the state of a DataLoader
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] |
open
| false
| null |
[] |
[
"Something that'd be nice to have is \"manual update of state\". One of the learning from training LLMs is the ability to skip some batches whenever we notice huge spike might be handy.",
"Your outline spec is very sound and clear, @lhoestq - thank you!\r\n\r\n@thomasw21, indeed that would be a wonderful extra feature. In Megatron-Deepspeed we manually drained the dataloader for the range we wanted. I wasn't very satisfied with the way we did it, since its behavior would change if you were to do multiple range skips. I think it should remember all the ranges it skipped and not just skip the last range - since otherwise the data is inconsistent (but we probably should discuss this in a separate issue not to derail this much bigger one).",
"Hi there! I think this is a critical issue and have an urgent need for it, in my attempt to train on a super large-scale dataset using `datasets`. It is impossible to resume a time-consuming (like one month) experiment by iterating all seen data again, which could possibly cost several days.\r\n\r\n@stas00 @thomasw21 @lhoestq Any updates on this problem after 1 year passed?",
"any update?",
"No update so far, I wonder if someone implemented a resumable pytorch Sampler somwhere.\r\n\r\nThen regarding resuming a streaming dataset, we'd first like to have an efficient way to skip shards automatically but this is not implemented yet",
"I opened a draft here for IterableDataset: https://github.com/huggingface/datasets/pull/6658\r\n\r\n\r\n\r\n```python\r\n\"\"\"Requires https://github.com/huggingface/datasets/pull/6658 (WIP)\"\"\"\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(..., streaming=True)\r\n# ds = ds.map(tokenize)\r\n# ds = ds.shuffle(seed=42, buffer_size=1000)\r\n\r\n# Init the dataset state_dict, or load it from a checkpoint\r\ndataset_state_dict = ds.state_dict()\r\n\r\n# Resumable training loop\r\nds.load_state_dict(dataset_state_dict)\r\ndataloader = DataLoader(ds, batch_size=batch_size)\r\nfor step, batch in enumerate(dataloader):\r\n ...\r\n if step % save_steps == 0:\r\n dataset_state_dict = ds.state_dict()\r\n```",
"Hi @lhoestq - can you provide more information and how to implement on saving and restoring vanilla DataLoader states with map-style datasets?\r\n\r\n",
"For now the easiest is probably to use the vanilla DataLoader only for batching and multiprocessing, and implement the resuming logic using a `Dataset` (it has `.select()` to skip examples) and a `dataset_state_dict`:\r\n\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(...)\r\n# ds = ds.map(tokenize)\r\n# ds = ds.shuffle(seed=42)\r\n\r\n# Init the dataset state_dict, or load it from a checkpoint\r\ndataset_state_dict = {\"step\": 0} \r\n\r\n# Resumable training loop\r\nstart_step = dataset_state_dict[\"step\"]\r\ndataloader = DataLoader(ds.select(range(start_step * batch_size, len(ds))), batch_size=batch_size)\r\nfor step, batch in enumerate(dataloader, start=start_step):\r\n ...\r\n if step % save_steps == 0:\r\n dataset_state_dict = {\"step\": step}\r\n```",
"Hello, I found a similar implementation online that seems to solve your problem. https://github.com/facebookresearch/vissl/blob/main/vissl/data/data_helper.py#L93\r\nit looks like we can set_start_iter in StatefulDistributedSampler to implement the stateful resume requirement we want.\r\n\r\n",
"Hi y'all, @lhoestq I wanted to flag that we currently have a StatefulDataLoader in `pytorch/data/torchdata` that has state_dict/load_state_dict methods, which will call a dataset's state_dict/load_state_dict methods but also handle multiprocessing under the hood. Any chance we can collaborate on this and try to get them to work well together? Please have a look here for some basic examples: https://github.com/pytorch/data/tree/main/torchdata/stateful_dataloader#saving-and-loading-state ",
"Fantastic ! This will help pushing our IterableDataset state_dict implementation at https://github.com/huggingface/datasets/pull/6658 :) I'll check if there is anything missing to maker them work together, and add tests and some docs referring to the StatefulDataLoader :)",
"Ah I just saw this disclaimer in the torchdata README and it feels like people should not rely on it. Should the StatefulDataLoader live elsewhere @andrewkho ?\r\n\r\n> ⚠️ As of July 2023, we have paused active development on TorchData and have paused new releases. We have learnt a lot from building it and hearing from users, but also believe we need to re-evaluate the technical design and approach given how much the industry has changed since we began the project. During the rest of 2023 we will be re-evaluating our plans in this space. Please reach out if you suggestions or comments (please use https://github.com/pytorch/data/issues/1196 for feedback).",
"@lhoestq Good find, we are in the midst of updating this disclaimer as we're re-starting development and regular releases, though our approach will be to iterate on DL V1 (ie StatefulDataLoader) instead of continuing development on datapipes+DLV2. Let's discuss on a call at some point to figure out the best path forward! ",
"As a heads up, `IterableDataset` state_dict has been added in https://github.com/huggingface/datasets/pull/6658\r\n\r\n...and it works out of the box with the `torchdata` `StatefulDataLoader` :)\r\n\r\nSee the docs at https://huggingface.co/docs/datasets/main/en/use_with_pytorch#checkpoint-and-resume",
"amazing! Thank you, @lhoestq \r\n\r\ndoes it work with non-iterable dataset as well? the docs only mention iterable dataset",
"It's for iterable dataset only. For regular dataset I believe the sampler should implement state_dict, but maybe @andrewkho might know best how to resume a regular dataset with torchdata",
"@stas00 stateful dataloader will save and resume samplers for map style datasets. If no state_dict/load_state_dict is provided by the sampler, it will naively skip samples to fast forward. See here for more details https://github.com/pytorch/data/blob/main/torchdata/stateful_dataloader/README.md \n\nHope this helps! ",
"Thank you very much for clarifying that, Andrew.\r\n\r\n",
"👋 I am trying to use `HF Streaming Dataset + TorchDDP + Stateful Dataloader`, to train using multiple nodes and large datasets. \r\n\r\nSo far, I have been able to use HF Streaming Dataset + TorchDDP with Vanilla Datasets. To do so, I implemented a custom iterable to make sure that shards are distributed across the multiple nodes, while letting the `dataset` take care of the multiple workers. The implementation uses `split_dataset_by_node`:\r\n\r\n```\r\nimport torch\r\nfrom torch.distributed import get_rank, get_world_size\r\nfrom torch.utils.data import DataLoader, IterableDataset\r\n\r\nclass MyIterableDataset(IterableDataset):\r\n def __init__(self, dataset):\r\n super().__init__()\r\n self.dataset = dataset\r\n self._iterable_by_node = None\r\n\r\n def __iter__(self):\r\n if torch.distributed.is_available() and torch.distributed.is_initialized():\r\n world_size = get_world_size()\r\n process_rank = get_rank()\r\n else:\r\n world_size = 1\r\n process_rank = 0\r\n\r\n if world_size > 1:\r\n self._iterable_by_node = split_dataset_by_node(\r\n self.dataset, rank=process_rank, world_size=world_size\r\n )\r\n else:\r\n self._iterable_by_node = self.dataset\r\n\r\n for example in self._iterable_by_node:\r\n # Trying with _state_dict, since `.state_dict()` creates a copy\r\n self._state_dict.update(self._iterable_by_node._state_dict)\r\n yield example\r\n\r\n def state_dict(self):\r\n return self._state_dict\r\n\r\n def load_state_dict(self, state):\r\n pass # Not implemented yet\r\n\r\n```\r\n \r\nThis doesn't seem to work with `StatefulDataLoader` though. I can see the state of the worker's dataset being updated in its corresponding workers' processes, but somehow the updates are not propagated back to the main process. I have tried with different variants of the above code without success. \r\n\r\nI confirmed that if I skip the custom class and pass `dataset` directly to the loader as in the [docs](https://huggingface.co/docs/datasets/main/en/use_with_pytorch#checkpoint-and-resume), the StatefulDataLoader sees the updates for each worker. However, if I do this, multiple nodes will see the same examples, which I definitely don't want.\r\n\r\nIs there something I am missing? It would be nice if streaming `dataset`s would support by default the multinode training (unless it already does it and I am missing something).\r\n\r\n\r\n",
"Hi ! Have you tried using `split_dataset_by_node()` and pass the result to the StatefulDataLoader ?\r\n\r\n```python\r\ndataloader = StatefulDataLoader(split_dataset_by_node(dataset, rank=process_rank, world_size=world_size))\r\n```",
"> Hi ! Have you tried using split_dataset_by_node() and pass the result to the StatefulDataLoader ?\r\n\r\n@lhoestq it took me some time to test, but it works like a charm. Thanks for the pointer. Totally missed this 🤦. "
] | 2023-01-23T10:58:54
| 2024-11-27T01:19:21
| null |
MEMBER
| null | null | null | null |
It would be nice when using `datasets` with a PyTorch DataLoader to be able to resume a training from a DataLoader state (e.g. to resume a training that crashed)
What I have in mind (but lmk if you have other ideas or comments):
For map-style datasets, this requires to have a PyTorch Sampler state that can be saved and reloaded per node and worker.
For iterable datasets, this requires to save the state of the dataset iterator, which includes:
- the current shard idx and row position in the current shard
- the epoch number
- the rng state
- the shuffle buffer
Right now you can already resume the data loading of an iterable dataset by using `IterableDataset.skip` but it takes a lot of time because it re-iterates on all the past data until it reaches the resuming point.
cc @stas00 @sgugger
| null |
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 8,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 11,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5454/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5454/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5451
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5451/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5451/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5451/events
|
https://github.com/huggingface/datasets/issues/5451
| 1,552,336,300
|
I_kwDODunzps5chsWs
| 5,451
|
ImageFolder BadZipFile: Bad offset for central directory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1524208?v=4",
"events_url": "https://api.github.com/users/hmartiro/events{/privacy}",
"followers_url": "https://api.github.com/users/hmartiro/followers",
"following_url": "https://api.github.com/users/hmartiro/following{/other_user}",
"gists_url": "https://api.github.com/users/hmartiro/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hmartiro",
"id": 1524208,
"login": "hmartiro",
"node_id": "MDQ6VXNlcjE1MjQyMDg=",
"organizations_url": "https://api.github.com/users/hmartiro/orgs",
"received_events_url": "https://api.github.com/users/hmartiro/received_events",
"repos_url": "https://api.github.com/users/hmartiro/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hmartiro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hmartiro/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hmartiro",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Could you share the full stack trace ? Which dataset did you try to load ?\r\n\r\nit may be related to https://github.com/huggingface/datasets/pull/5640",
"The `BadZipFile` error means the ZIP file is corrupted, so I'm closing this issue as it's not directly related to `datasets`.",
"For others that find this issue following a `BadZipFile` error, I had the same problem because I had a file in a folder dataset `my-image.target` and the datasets library was incorrectly determining that the (PNG) file was a zip archive. When it tried to extract the file, this error occurred. \r\n\r\nUpdating to `datasets==2.12.0` fixed the problem for me."
] | 2023-01-22T23:50:12
| 2023-05-23T10:35:48
| 2023-02-10T16:31:36
|
NONE
| null | null | null | null |
### Describe the bug
I'm getting the following exception:
```
lib/python3.10/zipfile.py:1353 in _RealGetContents │
│ │
│ 1350 │ │ # self.start_dir: Position of start of central directory │
│ 1351 │ │ self.start_dir = offset_cd + concat │
│ 1352 │ │ if self.start_dir < 0: │
│ ❱ 1353 │ │ │ raise BadZipFile("Bad offset for central directory") │
│ 1354 │ │ fp.seek(self.start_dir, 0) │
│ 1355 │ │ data = fp.read(size_cd) │
│ 1356 │ │ fp = io.BytesIO(data) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
BadZipFile: Bad offset for central directory
Extracting data files: 35%|█████████████████▊ | 38572/110812 [00:10<00:20, 3576.26it/s]
```
### Steps to reproduce the bug
```
load_dataset(
args.dataset_name,
args.dataset_config_name,
cache_dir=args.cache_dir,
),
```
### Expected behavior
loads the dataset
### Environment info
datasets==2.8.0
Python 3.10.8
Linux 129-146-3-202 5.15.0-52-generic #58~20.04.1-Ubuntu SMP Thu Oct 13 13:09:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5451/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5451/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 18 days, 16:41:24
|
https://api.github.com/repos/huggingface/datasets/issues/5450
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5450/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5450/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5450/events
|
https://github.com/huggingface/datasets/issues/5450
| 1,551,109,365
|
I_kwDODunzps5cdAz1
| 5,450
|
to_tf_dataset with a TF collator causes bizarrely persistent slowdown
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"wtf",
"Couldn't find what's causing this, this will need more investigation",
"A possible hint: The function it seems to be spending a lot of time in (when iterating over the original dataset) is `_get_mp` in the PIL JPEG decoder: \r\n\r\n",
"If \"mp\" is multiprocessing, this might suggest some kind of negative interaction between the JPEG decoder and TF's handling of processes/threads. Note that we haven't merged the parallel `to_tf_dataset` PR yet, so it's not caused by that PR!",
"Update: MP isn't multiprocessing at all, it's an internal PIL method for loading metadata from JPEG files. No idea why that would be a bottleneck, but I'll see if a Python profiler can't figure out where the time is actually being spent.",
"After further profiling, the slowdown is in the C methods for JPEG decoding that are included as part of PIL. Because Python profilers can't inspect inside that, I don't have any further information on which lines exactly are responsible for the slowdown or why.\r\n\r\nIn the meantime, I'm going to suggest switching from `return_tensors=\"tf\"` to `return_tensors=\"np\"` in most of our `transformers` code - this generally works better for pre-processing. Two relevant PRs are [here](https://github.com/huggingface/transformers/pull/21266) and [here](https://github.com/huggingface/notebooks/pull/308).",
"Closing this issue as we've done what we can with this one! "
] | 2023-01-20T16:08:37
| 2023-02-13T14:13:34
| 2023-02-13T14:13:34
|
MEMBER
| null | null | null | null |
### Describe the bug
This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing)
Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data collator that returns `tf` tensors, become very slow. We haven't been able to figure this one out - it can be intermittent, and we have no idea what could possibly cause it. The weirdest thing is that **the slowdown affects other attempts to access the underlying dataset**. If you try to iterate over the `tf.data.Dataset`, then interrupt execution, and then try to iterate over the original dataset, the original dataset is now also very slow! This is true even if the dataset format is not set to `tf` - the iteration is slow even though it's not calling TF at all!
There is a simple workaround for this - we can simply get our data collators to return `np` tensors. When we do this, the bug is never triggered and everything is fine. In general, `np` is preferred for this kind of preprocessing work anyway, when the preprocessing is not going to be compiled into a pure `tf.data` pipeline! However, the issue is fascinating, and the TF team were wondering if anyone in datasets (cc @lhoestq @mariosasko) might have an idea of what could cause this.
### Steps to reproduce the bug
Run the attached Colab.
### Expected behavior
The slowdown should go away, or at least not persist after we stop iterating over the `tf.data.Dataset`
### Environment info
The issue occurs on multiple versions of Python and TF, both on local machines and on Colab.
All testing was done using the latest versions of `transformers` and `datasets` from `main`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5450/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5450/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 23 days, 22:04:57
|
https://api.github.com/repos/huggingface/datasets/issues/5448
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5448/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5448/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5448/events
|
https://github.com/huggingface/datasets/issues/5448
| 1,550,618,514
|
I_kwDODunzps5cbI-S
| 5,448
|
Support fsspec 2023.1.0 in CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2023-01-20T10:26:31
| 2023-01-20T13:26:05
| 2023-01-20T13:26:05
|
MEMBER
| null | null | null | null |
Once we find out the root cause of:
- #5445
we should revert the temporary pin on fsspec introduced by:
- #5447
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5448/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5448/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:59:34
|
https://api.github.com/repos/huggingface/datasets/issues/5445
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5445/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5445/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5445/events
|
https://github.com/huggingface/datasets/issues/5445
| 1,550,588,703
|
I_kwDODunzps5cbBsf
| 5,445
|
CI tests are broken: AttributeError: 'mappingproxy' object has no attribute 'target'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2023-01-20T10:03:10
| 2023-01-20T10:28:44
| 2023-01-20T10:28:44
|
MEMBER
| null | null | null | null |
CI tests are broken, raising `AttributeError: 'mappingproxy' object has no attribute 'target'`. See: https://github.com/huggingface/datasets/actions/runs/3966497597/jobs/6797384185
```
...
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - AttributeError: 'mappingproxy' object has no attribute 'target'
===== 2076 passed, 19 skipped, 15 warnings, 47 errors in 115.54s (0:01:55) =====
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5445/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5445/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:25:34
|
https://api.github.com/repos/huggingface/datasets/issues/5444
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5444/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5444/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5444/events
|
https://github.com/huggingface/datasets/issues/5444
| 1,550,185,071
|
I_kwDODunzps5cZfJv
| 5,444
|
info messages logged as warnings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Looks like a duplicate of https://github.com/huggingface/datasets/issues/1948. \r\n\r\nI also think these should be logged as INFO messages, but let's see what @lhoestq thinks.",
"It can be considered unexpected to see a `map` function return instantaneously. The warning is here to explain this case by mentioning that the cache was used. I don't expect first time users (only seeing warnings) to guess that the cache works this way",
"Oh, so it's intentional? Do all Hugging Face packages use `warning` when using cache?\r\nI guess feel free to close this issue then.",
"Yes it's intentional for `map`. For `load_dataset` it's also intentional but for a different reason: it shows where in the cache the dataset is located, in case the user wants to clear the cache.",
"OK I see. It's surprising to me that these are considered \"something unexpected happened\", the concept of cache is pretty common.\r\n\r\nHas a user every actually complained that they ran their code once, and it took a minute while the data downloaded, then ran their code again and it ran really fast (and completed successfully) but they were so baffled by the fact that it ran quickly, _and_ didn't set the log level to INFO, _and_ hadn't read the docs (or thought about it) to know that datasets are cached, that they logged an issue asking that this information be output as a warning every time they run their code?\r\n\r\nThat seems like a very niche scenario to cater for, given that the side effect is to flood the console with irrelevant warnings for every other user every other time they run a bit of `datasets` code. And the real world impact is that people TURN OFF warnings, which is a pretty bad habit to get into.\r\n\r\nAnyhoo, if there's no chance I'm going to change your mind, please close the issue :)",
"I see your point and I'm not closed to switching to INFO, but I think those logs are important to make the library less opaque. I also just checked `transformers` scripts and they default to INFO which is nice. However for colab users the default is still WARNING iirc, and it counts as one of the main env where `datasets` is used.\r\n\r\nWe also use progress bars a lot in `datasets`, that are shown if the logger is at the WARNING level. But we offer a function to disable the progress bars if necessary.",
"These kinds of messages are logged as INFO in Transformers, so we should probably be consistent with them"
] | 2023-01-20T01:19:18
| 2023-07-12T17:19:31
| 2023-07-12T17:19:31
|
NONE
| null | null | null | null |
### Describe the bug
Code in `datasets` is using `logger.warning` when it should be using `logger.info`.
Some of these are probably a matter of opinion, but I think anything starting with `logger.warning(f"Loading chached` clearly falls into the info category.
Definitions from the Python docs for reference:
* INFO: Confirmation that things are working as expected.
* WARNING: An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected.
In theory, a user should be able to resolve things such that there are no warnings.
### Steps to reproduce the bug
Load any dataset that's already cached.
### Expected behavior
No output when log level is at the default WARNING level.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 9.0.0
- Pandas version: 1.5.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5444/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5444/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 173 days, 16:00:13
|
https://api.github.com/repos/huggingface/datasets/issues/5442
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5442/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5442/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5442/events
|
https://github.com/huggingface/datasets/issues/5442
| 1,550,084,450
|
I_kwDODunzps5cZGli
| 5,442
|
OneDrive Integrations with HF Datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4",
"events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}",
"followers_url": "https://api.github.com/users/Mohammed20201991/followers",
"following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mohammed20201991",
"id": 59222637,
"login": "Mohammed20201991",
"node_id": "MDQ6VXNlcjU5MjIyNjM3",
"organizations_url": "https://api.github.com/users/Mohammed20201991/orgs",
"received_events_url": "https://api.github.com/users/Mohammed20201991/received_events",
"repos_url": "https://api.github.com/users/Mohammed20201991/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mohammed20201991",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi! \r\n\r\nWe use [`fsspec`](https://github.com/fsspec/filesystem_spec) to integrate with storage providers. You can find more info (and the usage examples) in [our docs](https://huggingface.co/docs/datasets/v2.8.0/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage).\r\n\r\n[`gdrivefs`](https://github.com/fsspec/gdrivefs) makes it possible to use Google Drive as a storage service in Datasets, but this is not the case for OneDrive, since its[ Python SDK](https://github.com/OneDrive/onedrive-sdk-python) is not integrated with `fsspec`. Can you please request the integration with `fsspec` in their repo to address this limitation?",
"I'm closing this issue as implementing a fsspec-compliant OneDrive filesystem is not our responsibility."
] | 2023-01-19T23:12:08
| 2023-02-24T16:17:51
| 2023-02-24T16:17:51
|
NONE
| null | null | null | null |
### Feature request
First of all , I would like to thank all community who are developed DataSet storage and make it free available
How to integrate our Onedrive account or any other possible storage clouds (like google drive,...) with the **HF** datasets section.
For example, if I have **50GB** on my **Onedrive** account and I want to move between drive and Hugging face repo or vis versa
### Motivation
make the dataset section more flexible with other possible storage
like the integration between Google Collab and Google drive the storage
### Your contribution
Can be done using Hugging face CLI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5442/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5442/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 35 days, 17:05:43
|
https://api.github.com/repos/huggingface/datasets/issues/5439
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5439/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5439/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5439/events
|
https://github.com/huggingface/datasets/issues/5439
| 1,537,973,564
|
I_kwDODunzps5bq508
| 5,439
|
[dataset request] Add Common Voice 12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31034499?v=4",
"events_url": "https://api.github.com/users/MohammedRakib/events{/privacy}",
"followers_url": "https://api.github.com/users/MohammedRakib/followers",
"following_url": "https://api.github.com/users/MohammedRakib/following{/other_user}",
"gists_url": "https://api.github.com/users/MohammedRakib/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MohammedRakib",
"id": 31034499,
"login": "MohammedRakib",
"node_id": "MDQ6VXNlcjMxMDM0NDk5",
"organizations_url": "https://api.github.com/users/MohammedRakib/orgs",
"received_events_url": "https://api.github.com/users/MohammedRakib/received_events",
"repos_url": "https://api.github.com/users/MohammedRakib/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MohammedRakib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohammedRakib/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MohammedRakib",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
] |
[
"@polinaeterna any tentative date on when the Common Voice 12.0 dataset will be added ?",
"This dataset is now hosted on the Hub here: https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0"
] | 2023-01-18T13:07:05
| 2023-07-21T14:26:10
| 2023-07-21T14:26:09
|
NONE
| null | null | null | null |
### Feature request
Please add the common voice 12_0 datasets. Apart from English, a significant amount of audio-data has been added to the other minor-language datasets.
### Motivation
The dataset link:
https://commonvoice.mozilla.org/en/datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5439/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5439/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 184 days, 1:19:04
|
https://api.github.com/repos/huggingface/datasets/issues/5437
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5437/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5437/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5437/events
|
https://github.com/huggingface/datasets/issues/5437
| 1,536,837,144
|
I_kwDODunzps5bmkYY
| 5,437
|
Can't load png dataset with 4 channel (RGBA)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4",
"events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}",
"followers_url": "https://api.github.com/users/WiNE-iNEFF/followers",
"following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}",
"gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WiNE-iNEFF",
"id": 41611046,
"login": "WiNE-iNEFF",
"node_id": "MDQ6VXNlcjQxNjExMDQ2",
"organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs",
"received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events",
"repos_url": "https://api.github.com/users/WiNE-iNEFF/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WiNE-iNEFF",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\r\n\r\n",
"> Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\n> \n> \n\nI have only 1 folder that I use in the load_dataset function with the name \"IMGDATA\" and all my 9000 images are located in this folder.\n`\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"IMGDATA\")\n`\nAt the same time, using another data set with images consisting of 3 RGB channels, everything works",
"Okay, I figured out what was wrong. When uploading my dataset via Google Drive, the images broke and Pillow couldn't open them. As a result, I solved the problem by downloading the ZIP archive"
] | 2023-01-17T18:22:27
| 2023-01-18T20:20:15
| 2023-01-18T20:20:15
|
NONE
| null | null | null | null |
I try to create dataset which contains about 9000 png images 64x64 in size, and they are all 4-channel (RGBA). When trying to use load_dataset() then a dataset is created from only 2 images. What exactly interferes I can not understand.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5437/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5437/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 1:57:48
|
https://api.github.com/repos/huggingface/datasets/issues/5435
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5435/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5435/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5435/events
|
https://github.com/huggingface/datasets/issues/5435
| 1,536,099,300
|
I_kwDODunzps5bjwPk
| 5,435
|
Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/80093591?v=4",
"events_url": "https://api.github.com/users/DanielYang59/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielYang59/followers",
"following_url": "https://api.github.com/users/DanielYang59/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielYang59/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DanielYang59",
"id": 80093591,
"login": "DanielYang59",
"node_id": "MDQ6VXNlcjgwMDkzNTkx",
"organizations_url": "https://api.github.com/users/DanielYang59/orgs",
"received_events_url": "https://api.github.com/users/DanielYang59/received_events",
"repos_url": "https://api.github.com/users/DanielYang59/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DanielYang59/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielYang59/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DanielYang59",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Just for your information, Tensorflow confirmed this issue [here.](https://github.com/tensorflow/tensorflow/issues/59279)",
"Thanks for reporting, @HaoyuYang59.\r\n\r\nPlease note that these are different \"dataset\" objects: our docs refer to Hugging Face `datasets.Dataset` and not to TensorFlow `tf.data.Dataset`.\r\n\r\nOur `datasets.Dataset.shuffle` method does not have a `reshuffle_each_iteration` argument. Therefore, I would say the statement in our docs is True because they refer to `datasets.Dataset.shuffle`, `datasets.Dataset.skip` and `datasets.Dataset.take`.\r\n\r\nI think this issue is restricted to TensorFlow dataset, and this would be addressed by them in the issue you opened in their repo: https://github.com/tensorflow/tensorflow/issues/59279",
"Also note that you are referring to an outdated documentation page: datasets 1.10.2 version\r\n\r\nCurrent datasets version is 2.8.0 and the corresponding documentation page is: https://huggingface.co/docs/datasets/stream#split-dataset",
"Hi @albertvillanova thanks for your reply and your explaination here. \r\n\r\nSorry for the confusion as I'm not actually a user of your repo and I just happen to find the thread by Google (and didn't read carefully).\r\n\r\nGreat to know that and you made everything very clear now.\r\n\r\nThanks for your time and sorry for the consusion.\r\n\r\nWishing you a wonderful time. \r\n\r\nRegards"
] | 2023-01-17T10:04:16
| 2023-01-19T09:56:03
| 2023-01-19T09:56:03
|
NONE
| null | null | null | null |
### Describe the bug
In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip), it states:
> Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer. Therefore it is advised to shuffle the dataset before splitting using take or skip. See more details in the [Shuffling the dataset: shuffle](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#iterable-dataset-shuffling) section.`
>> \# You can also create splits from a shuffled dataset
>> train_dataset = shuffled_dataset.skip(1000)
>> eval_dataset = shuffled_dataset.take(1000)
Where the shuffled dataset comes from:
`shuffled_dataset = dataset.shuffle(buffer_size=10_000, seed=42)`
At least in Tensorflow 2.9/2.10/2.11, [docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) states the `reshuffle_each_iteration` argument is `True` by default. This means the dataset would be shuffled after each epoch, and as a result **the validation data would leak into training test**.
### Steps to reproduce the bug
N/A
### Expected behavior
The `reshuffle_each_iteration` argument should be set to `False`.
### Environment info
Tensorflow 2.9/2.10/2.11
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/80093591?v=4",
"events_url": "https://api.github.com/users/DanielYang59/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielYang59/followers",
"following_url": "https://api.github.com/users/DanielYang59/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielYang59/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DanielYang59",
"id": 80093591,
"login": "DanielYang59",
"node_id": "MDQ6VXNlcjgwMDkzNTkx",
"organizations_url": "https://api.github.com/users/DanielYang59/orgs",
"received_events_url": "https://api.github.com/users/DanielYang59/received_events",
"repos_url": "https://api.github.com/users/DanielYang59/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DanielYang59/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielYang59/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DanielYang59",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5435/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5435/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 23:51:47
|
https://api.github.com/repos/huggingface/datasets/issues/5434
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5434/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5434/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5434/events
|
https://github.com/huggingface/datasets/issues/5434
| 1,536,090,042
|
I_kwDODunzps5bjt-6
| 5,434
|
sample_dataset module not found
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15816213?v=4",
"events_url": "https://api.github.com/users/nickums/events{/privacy}",
"followers_url": "https://api.github.com/users/nickums/followers",
"following_url": "https://api.github.com/users/nickums/following{/other_user}",
"gists_url": "https://api.github.com/users/nickums/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nickums",
"id": 15816213,
"login": "nickums",
"node_id": "MDQ6VXNlcjE1ODE2MjEz",
"organizations_url": "https://api.github.com/users/nickums/orgs",
"received_events_url": "https://api.github.com/users/nickums/received_events",
"repos_url": "https://api.github.com/users/nickums/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nickums/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickums/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nickums",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! Can you describe what the actual error is?",
"working on the setfit example script\r\n\r\n from setfit import SetFitModel, SetFitTrainer, sample_dataset\r\n\r\nImportError: cannot import name 'sample_dataset' from 'setfit' (C:\\Python\\Python38\\lib\\site-packages\\setfit\\__init__.py)\r\n\r\n apart from that, I also had to hack these loads to import thses modules:\r\n from datasets.load import load_dataset \r\n from datasets.arrow_dataset import Dataset\r\n from datasets.dataset_dict import DatasetDict",
"Hi! This issue is related to the [SetFit](https://github.com/huggingface/setfit) project, so can you please open it there?"
] | 2023-01-17T09:57:54
| 2023-01-19T13:52:12
| 2023-01-19T07:55:11
|
NONE
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5434/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5434/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 21:57:17
|
https://api.github.com/repos/huggingface/datasets/issues/5433
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5433/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5433/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5433/events
|
https://github.com/huggingface/datasets/issues/5433
| 1,536,017,901
|
I_kwDODunzps5bjcXt
| 5,433
|
Support latest Docker image in CI benchmarks
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Sorry, it was us:[^1] https://github.com/iterative/cml/pull/1317 & https://github.com/iterative/cml/issues/1319#issuecomment-1385599559; should be fixed with [v0.18.17](https://github.com/iterative/cml/releases/tag/v0.18.17).\r\n\r\n[^1]: More or less, see https://github.com/yargs/yargs/issues/873.",
"Opened https://github.com/huggingface/datasets/pull/5436 unpinning again the container image.",
"Hi @0x2b3bfa0, thanks a lot for the investigation, the context about the the root cause and for fixing it!!\r\n\r\nWe are reviewing your PR to unpin the container image."
] | 2023-01-17T09:06:08
| 2023-01-18T06:29:08
| 2023-01-18T06:29:08
|
MEMBER
| null | null | null | null |
Once we find out the root cause of:
- #5431
we should revert the temporary pin on the Docker image version introduced by:
- #5432
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5433/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5433/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 21:23:00
|
https://api.github.com/repos/huggingface/datasets/issues/5431
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5431/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5431/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5431/events
|
https://github.com/huggingface/datasets/issues/5431
| 1,535,862,621
|
I_kwDODunzps5bi2dd
| 5,431
|
CI benchmarks are broken: Unknown arguments: runnerPath, path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2023-01-17T06:49:57
| 2023-01-18T06:33:24
| 2023-01-17T08:51:18
|
MEMBER
| null | null | null | null |
Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161
```
Unknown arguments: runnerPath, path
```
Stack trace:
```
100%|██████████| 500/500 [00:01<00:00, 338.98ba/s]
Updating lock file 'dvc.lock'
To track the changes with git, run:
git add dvc.lock
To enable auto staging, run:
dvc config core.autostage true
Use `dvc push` to send your updates to remote storage.
cml send-comment <markdown file>
Global Options:
--log Logging verbosity
[string] [choices: "error", "warn", "info", "debug"] [default: "info"]
--driver Git provider where the repository is hosted
[string] [choices: "github", "gitlab", "bitbucket"] [default: infer from the
environment]
--repo Repository URL or slug
[string] [default: infer from the environment]
--driver-token, --token CI driver personal/project access token (PAT)
[string] [default: infer from the environment]
--help Show help [boolean]
Options:
--target Comment type (`commit`, `pr`, `commit/f00bar`,
`pr/42`, `issue/1337`),default is automatic (`pr`
but fallback to `commit`). [string]
--watch Watch for changes and automatically update the
comment [boolean]
--publish Upload any local images found in the Markdown
report [boolean] [default: true]
--publish-url Self-hosted image server URL
[string] [default: "https://asset.cml.dev/"]
--publish-native, --native Uses driver's native capabilities to upload assets
instead of CML's storage; not available on GitHub
[boolean]
--watermark-title Hidden comment marker (used for targeting in
subsequent `cml comment update`); "{workflow}" &
"{run}" are auto-replaced [string] [default: ""]
Unknown arguments: runnerPath, path
Error: Process completed with exit code 1.
```
Issue reported to iterative/cml:
- iterative/cml#1319
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5431/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5431/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:01:21
|
https://api.github.com/repos/huggingface/datasets/issues/5430
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5430/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5430/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5430/events
|
https://github.com/huggingface/datasets/issues/5430
| 1,535,856,503
|
I_kwDODunzps5bi093
| 5,430
|
Support Apache Beam >= 2.44.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Some of the shard files now have 0 number of rows.\r\n\r\nWe have opened an issue in the Apache Beam repo:\r\n- https://github.com/apache/beam/issues/25041"
] | 2023-01-17T06:42:12
| 2024-02-06T19:24:21
| 2024-02-06T19:24:21
|
MEMBER
| null | null | null | null |
Once we find out the root cause of:
- #5426
we should revert the temporary pin on apache-beam introduced by:
- #5429
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5430/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5430/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 385 days, 12:42:09
|
https://api.github.com/repos/huggingface/datasets/issues/5428
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5428/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5428/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5428/events
|
https://github.com/huggingface/datasets/issues/5428
| 1,535,166,139
|
I_kwDODunzps5bgMa7
| 5,428
|
Load/Save FAISS index using fsspec
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi! Sure, feel free to submit a PR. Maybe if we want to be consistent with the existing API, it would be cleaner to directly add support for `fsspec` paths in `Dataset.load_faiss_index`/`Dataset.save_faiss_index` in the same manner as it was done in `Dataset.load_from_disk`/`Dataset.save_to_disk`.",
"That's a great idea! I'll do that instead. "
] | 2023-01-16T16:08:12
| 2023-03-27T15:18:22
| 2023-03-27T15:18:22
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
From what I understand `faiss` already support this [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)
I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`.
### Motivation
In my case, I'm saving faiss index in cloud storage and use `fsspec` to load them. It would be ideal if I could send the stream directly instead of copying the file locally (or mounting the bucket) and then load the index.
### Your contribution
I can submit the PR
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5428/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5428/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 69 days, 23:10:10
|
https://api.github.com/repos/huggingface/datasets/issues/5427
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5427/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5427/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5427/events
|
https://github.com/huggingface/datasets/issues/5427
| 1,535,162,889
|
I_kwDODunzps5bgLoJ
| 5,427
|
Unable to download dataset id_clickbait
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45941585?v=4",
"events_url": "https://api.github.com/users/ilos-vigil/events{/privacy}",
"followers_url": "https://api.github.com/users/ilos-vigil/followers",
"following_url": "https://api.github.com/users/ilos-vigil/following{/other_user}",
"gists_url": "https://api.github.com/users/ilos-vigil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ilos-vigil",
"id": 45941585,
"login": "ilos-vigil",
"node_id": "MDQ6VXNlcjQ1OTQxNTg1",
"organizations_url": "https://api.github.com/users/ilos-vigil/orgs",
"received_events_url": "https://api.github.com/users/ilos-vigil/received_events",
"repos_url": "https://api.github.com/users/ilos-vigil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ilos-vigil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ilos-vigil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ilos-vigil",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thanks for reporting, @ilos-vigil.\r\n\r\nWe have transferred this issue to the corresponding dataset on the Hugging Face Hub: https://huggingface.co/datasets/id_clickbait/discussions/1 "
] | 2023-01-16T16:05:36
| 2023-01-18T09:51:28
| 2023-01-18T09:25:19
|
NONE
| null | null | null | null |
### Describe the bug
I tried to download dataset `id_clickbait`, but receive this error message.
```
FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip
```
When i open the link using browser, i got this XML data.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>md-datasets-cache-zipfiles-prod</BucketName><RequestId>NVRM6VEEQD69SD00</RequestId><HostId>W/SPDxLGvlCGi0OD6d7mSDvfOAUqLAfvs9nTX50BkJrjMny+X9Jnqp/Li2lG9eTUuT4MUkAA2jjTfCrCiUmu7A==</HostId></Error>
```
### Steps to reproduce the bug
Code snippet:
```
from datasets import load_dataset
load_dataset('id_clickbait', 'annotated')
load_dataset('id_clickbait', 'raw')
```
Link to Kaggle notebook: https://www.kaggle.com/code/ilosvigil/bug-check-on-id-clickbait-dataset
### Expected behavior
Successfully download and load `id_newspaper` dataset.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 8.0.0
- Pandas version: 1.3.5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5427/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5427/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 17:19:43
|
https://api.github.com/repos/huggingface/datasets/issues/5426
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5426/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5426/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5426/events
|
https://github.com/huggingface/datasets/issues/5426
| 1,535,158,555
|
I_kwDODunzps5bgKkb
| 5,426
|
CI tests are broken: SchemaInferenceError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2023-01-16T16:02:07
| 2023-06-02T06:40:32
| 2023-01-16T16:49:04
|
MEMBER
| null | null | null | null |
CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004
```
FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
```
Stack trace:
```
______________ BeamBuilderTest.test_download_and_prepare_sharded _______________
[gw1] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
self = <tests.test_beam.BeamBuilderTest testMethod=test_download_and_prepare_sharded>
@require_beam
def test_download_and_prepare_sharded(self):
import apache_beam as beam
original_write_parquet = beam.io.parquetio.WriteToParquet
expected_num_examples = len(get_test_dummy_examples())
with tempfile.TemporaryDirectory() as tmp_cache_dir:
builder = DummyBeamDataset(cache_dir=tmp_cache_dir, beam_runner="DirectRunner")
with patch("apache_beam.io.parquetio.WriteToParquet") as write_parquet_mock:
write_parquet_mock.side_effect = partial(original_write_parquet, num_shards=2)
> builder.download_and_prepare()
tests/test_beam.py:97:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:864: in download_and_prepare
**download_and_prepare_kwargs,
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:1976: in _download_and_prepare
num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:694: in finalize
shard_num_bytes, _ = parquet_to_arrow(source, destination)
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:740: in parquet_to_arrow
num_bytes, num_examples = writer.finalize()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <datasets.arrow_writer.ArrowWriter object at 0x7f6dcbb3e810>
close_stream = True
def finalize(self, close_stream=True):
self.write_rows_on_file()
# In case current_examples < writer_batch_size, but user uses finalize()
if self._check_duplicates:
self.check_duplicate_keys()
# Re-intializing to empty list for next batch
self.hkey_record = []
self.write_examples_on_file()
# If schema is known, infer features even if no examples were written
if self.pa_writer is None and self.schema:
self._build_writer(self.schema)
if self.pa_writer is not None:
self.pa_writer.close()
self.pa_writer = None
if close_stream:
self.stream.close()
else:
if close_stream:
self.stream.close()
> raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
E datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:593: SchemaInferenceError
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5426/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5426/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:46:57
|
https://api.github.com/repos/huggingface/datasets/issues/5425
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5425/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5425/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5425/events
|
https://github.com/huggingface/datasets/issues/5425
| 1,534,581,850
|
I_kwDODunzps5bd9xa
| 5,425
|
Sort on multiple keys with datasets.Dataset.sort()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/101344863?v=4",
"events_url": "https://api.github.com/users/rocco-fortuna/events{/privacy}",
"followers_url": "https://api.github.com/users/rocco-fortuna/followers",
"following_url": "https://api.github.com/users/rocco-fortuna/following{/other_user}",
"gists_url": "https://api.github.com/users/rocco-fortuna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rocco-fortuna",
"id": 101344863,
"login": "rocco-fortuna",
"node_id": "U_kgDOBgpmXw",
"organizations_url": "https://api.github.com/users/rocco-fortuna/orgs",
"received_events_url": "https://api.github.com/users/rocco-fortuna/received_events",
"repos_url": "https://api.github.com/users/rocco-fortuna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rocco-fortuna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rocco-fortuna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rocco-fortuna",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"Hi! \r\n\r\n`Dataset.sort` calls `df.sort_values` internally, and `df.sort_values` brings all the \"sort\" columns in memory, so sorting on multiple keys could be very expensive. This makes me think that maybe we can replace `df.sort_values` with `pyarrow.compute.sort_indices` - the latter can also sort on multiple keys and currently loads the data into memory; however, there is a plan to eventually implement \"memory-map\" friendly kernels for the Arrow compute ops (using the Acero execution engine). \r\n\r\nSo to address this issue, you should replace `df.sort_values` with `pyarrow.compute.sort_indices` in `Dataset.sort` and adjust the signature of this function (deprecate the `kind` parameter, etc.).\r\n\r\nPS: Feel free to ping us if you need some additional help/pointers",
"@mariosasko If I understand the code right, using `pyarrow.compute.sort_indices` would also require changes to the `select` method if it is meant to sort multiple keys. That's because `select` only accepts 1D input for `indices`, not an iterable or similar which would be required for multiple keys unless you want some looping over selects. Doesn't seem that straight-forward but I might be missing something here... ",
"@MichlF No, it doesn't require modifying select because sorting on multiple keys also returns a 1D array.\r\n\r\nIt's easier to understand with an example:\r\n```python\r\n>>> import pyarrow as pa\r\n>>> import pyarrow.compute as pc\r\n>>> table = pa.table({\r\n... \"name\": [\"John\", \"Eve\", \"Peter\", \"John\"],\r\n... \"surname\": [\"Johnson\", \"Smith\", \"Smith\", \"Doe\"],\r\n... \"age\": [20, 40, 30, 50],\r\n... })\r\n>>> indices = pc.sort_indices(table, sort_keys=[(\"name\", \"ascending\"), (\"surname\", \"ascending\")])\r\n>>> print(indices)\r\n[\r\n 1,\r\n 3,\r\n 0,\r\n 2\r\n]\r\n```\r\n\r\n",
"Thanks for clarifying.\r\nI can prepare a PR to address this issue. This would be my first PR here so I have a few maybe silly questions but:\r\n- What is the preferred input type of `sort_keys` for the sort method? A sequence with name, order tuples like pyarrow's `sort_indices` requires?\r\n- What about backwards compatability: is it supposed to also accept the old way of calling sort() or should both `column` and `kind` be deprecated?\r\n- If `sort_keys` is provided in the same format as for pyarrow's `sort_indices` - i.e. along with order for each column -, `reverse` doesn't make much sense either and should be deprecated as well I assume.",
"I think we can have the following signature:\r\n```python\r\ndef sort(\r\n self,\r\n column_names: Union[str, Sequence[str]],\r\n reverse: Union[bool, Sequence[bool]] = False,\r\n kind=\"deprecated\",\r\n null_placement: str = \"last\",\r\n keep_in_memory: bool = False,\r\n load_from_cache_file: bool = True,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n ) -> \"Dataset\":\r\n``` \r\n\r\nSo we should:\r\n* rename`column` to `column_names`. `column` is a positional argument, so it's OK to rename it (not marked as positional-only with \"/\", but still should be fine)\r\n* deprecate `kind`\r\n* keep `reverse` instead of introducing `sort_keys`, but we should allow passing a list of booleans that defines the sort order of each column from `column_names` to it (`reverse = False` would be equal to `[False] * len(column_names)` and `reverse = True` to `[True] * len(column_names)`)",
"I am pretty much done with the PR. Just one clarification: `Sequence` in `arrow_dataset.py` is a custom dataclass from `features.py` instead of the `type.hinting` class `Sequence` from Python. Do you suggest using that custom `Sequence` class somehow ? Otherwise signature currently reads instead:\r\n```Python\r\n def sort(\r\n self,\r\n column_names: Union[str, List[str]],\r\n reverse: Union[bool, List[bool]] = False,\r\n kind = \"deprecated\",\r\n null_placement: str = \"last\",\r\n keep_in_memory: bool = False,\r\n load_from_cache_file: bool = True,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n )\r\n```\r\n\r\nAlso, to maintain backwards compatibility, I added conditionals for `null_placement`, because pyarrow's `null_placement` only accepts `at_start` and `at_end`, and not `last` and `first`.\r\nIf that is all good, I think I can open the PR.",
"I meant `typing.Sequence` (`datasets.Sequence` is a feature type). \r\n\r\nRegarding `null_placement`, I think we can support both `at_start` and `at_end`, and `last` and `first` (for backward compatibility; convert internally to `at_end` and `at_start` respectively).",
"> I meant typing.Sequence (datasets.Sequence is a feature type).\r\n\r\nSorry, I actually meant `typing.Sequence` and not `type.hinting`. However, the issue is still that `dataset.Sequence` is imported in `arrow_dataset.py` so I cannot import and use `typing.Sequence` for the `sort`'s signature without overwriting the `dataset.Sequence` import. The latter is used in the `align_labels_with_mapping` method so it's a necessary import for `arrow_dataset.py`. \r\nTo import `typing.Sequence` as something else than `Sequence` to avoid overwriting may only be confusing and doesn't seem good practice!? The other solution is to keep `List` type hinting as in the signature I posted in my previous post but this excludes other Sequence types and may cause problems further down the line.\r\nPlease advise,\r\nThanks for all the clarifications!",
"You can avoid the name collision by renaming `typing.Sequence` to `Sequence_` when importing:\r\n```python\r\nfrom typing import Sequence as Sequence_\r\n```",
"Resolved via #5502 "
] | 2023-01-16T09:22:26
| 2023-02-24T16:15:11
| 2023-02-24T16:15:11
|
NONE
| null | null | null | null |
### Feature request
From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1
`sort()` does not preserve ordering, and it does not support sorting on multiple columns, nor a key function.
The suggested solution:
> ... having something similar to pandas and be able to specify multiple columns for sorting. We’re already using pandas under the hood to do the sorting in datasets.
The suggested workaround:
> convert your dataset to pandas and use `df.sort_values()`
### Motivation
Preserved ordering when sorting is very handy when one needs to sort on multiple columns, A and B, so that e.g. whenever A is equal for two or more rows, B is kept sorted.
Having a parameter to do this in 🤗datasets would be cleaner than going through pandas and back, and it wouldn't add much complexity to the library.
Alternatives:
- the possibility to specify multiple keys to sort by with decreasing priority (suggested solution),
- the ability to provide a key function for sorting, so that one can manually specify the sorting criteria.
### Your contribution
I'll be happy to contribute by submitting a PR. Will get documented on `CONTRIBUTING.MD`.
Would love to get thoughts on this, if anyone has anything to add.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5425/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5425/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 39 days, 6:52:45
|
https://api.github.com/repos/huggingface/datasets/issues/5424
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5424/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5424/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5424/events
|
https://github.com/huggingface/datasets/issues/5424
| 1,534,394,756
|
I_kwDODunzps5bdQGE
| 5,424
|
When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"events_url": "https://api.github.com/users/macabdul9/events{/privacy}",
"followers_url": "https://api.github.com/users/macabdul9/followers",
"following_url": "https://api.github.com/users/macabdul9/following{/other_user}",
"gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/macabdul9",
"id": 25720695,
"login": "macabdul9",
"node_id": "MDQ6VXNlcjI1NzIwNjk1",
"organizations_url": "https://api.github.com/users/macabdul9/orgs",
"received_events_url": "https://api.github.com/users/macabdul9/received_events",
"repos_url": "https://api.github.com/users/macabdul9/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions",
"type": "User",
"url": "https://api.github.com/users/macabdul9",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! You can get a `DatasetDict` if you pass a dictionary with read instructions as follows:\r\n```python\r\ninstructions = [\r\n ReadInstruction(split_name=\"train\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"dev\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"test\", from_=0, to=5, unit='%', rounding='closest')\r\n]\r\n\r\ndataset = load_dataset('csv', data_dir=\"data/\", data_files={\"train\":\"train.tsv\", \"dev\":\"dev.tsv\", \"test\":\"test.tsv\"}, delimiter=\"\\t\", split={inst.split_name: inst for inst in instructions})\r\n```\r\n"
] | 2023-01-16T06:54:28
| 2023-02-24T16:19:00
| 2023-02-24T16:19:00
|
NONE
| null | null | null | null |
### Describe the bug
I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. Although the ReadInstruction is being applied correctly and I was expecting it to be `DatasetDict` but instead it is a list of `Dataset`.
### Steps to reproduce the bug
Steps to reproduce the behaviour:
1. Import
`from datasets import load_dataset, ReadInstruction`
2. Instruction to load the dataset
```
instructions = [
ReadInstruction(split_name="train", from_=0, to=10, unit='%', rounding='closest'),
ReadInstruction(split_name="dev", from_=0, to=10, unit='%', rounding='closest'),
ReadInstruction(split_name="test", from_=0, to=5, unit='%', rounding='closest')
]
```
3. Load
`dataset = load_dataset('csv', data_dir="data/", data_files={"train":"train.tsv", "dev":"dev.tsv", "test":"test.tsv"}, delimiter="\t", split=instructions)`
### Expected behavior
**Current behaviour**

:
**Expected behaviour**

### Environment info
``datasets==2.8.0
``
`Python==3.8.5
`
`Platform - Ubuntu 20.04.4 LTS`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5424/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5424/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 39 days, 9:24:32
|
https://api.github.com/repos/huggingface/datasets/issues/5422
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5422/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5422/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5422/events
|
https://github.com/huggingface/datasets/issues/5422
| 1,533,385,239
|
I_kwDODunzps5bZZoX
| 5,422
|
Datasets load error for saved github issues
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7360564?v=4",
"events_url": "https://api.github.com/users/folterj/events{/privacy}",
"followers_url": "https://api.github.com/users/folterj/followers",
"following_url": "https://api.github.com/users/folterj/following{/other_user}",
"gists_url": "https://api.github.com/users/folterj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/folterj",
"id": 7360564,
"login": "folterj",
"node_id": "MDQ6VXNlcjczNjA1NjQ=",
"organizations_url": "https://api.github.com/users/folterj/orgs",
"received_events_url": "https://api.github.com/users/folterj/received_events",
"repos_url": "https://api.github.com/users/folterj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/folterj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/folterj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/folterj",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I can confirm that the error exists!\r\nI'm trying to read 3 parquet files locally:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.parquet\"),\r\n \"validation\": os.path.join(sentiment_analysis_data_path, \"validation.parquet\"),\r\n \"test\": os.path.join(sentiment_analysis_data_path, \"test.parquet\"),\r\n },\r\n)\r\n```\r\n\r\nBut you can fix it, by specifying `features` for `load_dataset()` function like this:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nfeatures = Features(\r\n {\r\n \"label\": ClassLabel(\r\n num_classes=3,\r\n names=[\"negative\", \"neutral\", \"positive\"],\r\n ),\r\n \"text\": Value(dtype=\"string\"),\r\n }\r\n)\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.parquet\"),\r\n \"validation\": os.path.join(sentiment_analysis_data_path, \"validation.parquet\"),\r\n \"test\": os.path.join(sentiment_analysis_data_path, \"test.parquet\"),\r\n },\r\n features=features,\r\n)\r\n\r\nprint(review_dataset)\r\n```",
"@Extremesarova I think this is a different issue, but understand using features could be a work-around.\r\nIt seems the field `closed_at` is `null` in many cases.\r\n\r\nI've not found a way to specify only a single feature without (succesfully) specifiying the full and quite detailed set of expected features. Using this features set gives an error the column names don't match.\r\n`features = Features({'closed_at': Value(dtype='timestamp[s]', id=None)})`\r\n\r\n",
"Found this when searching for the same error, looks like based on #3965 it's just an issue with the data. I found that changing `df = pd.DataFrame.from_records(all_issues)` to `df = pd.DataFrame.from_records(all_issues).dropna(axis=1, how='all').drop(['milestone'], axis=1)` from the fetch_issues function fixed the issue. \r\nThe \"milestone\" column seemed to be problematic (only ~50 non null rows) and dropped any columns that were all null as well just in case.",
"I have this same issue. I saved a dataset to disk and now I can't load it.",
"Ok the solution was to use load_from_disk instead of load_dataset.",
"Hi @folterj , I faced same issue while creating `issues_dataset` (https://huggingface.co/learn/nlp-course/chapter5/5?fw=pt). The fix which worked for me was loading the *.jsonl file as pd.read_json and then converting it into a Dataset using datasets API.\r\n```\r\nimport pandas as pd\r\ndf=pd.read_json(\"datasets-issues.jsonl\", lines=True)\r\ndf.head()\r\n\r\nfrom datasets import Dataset\r\nissues_dataset = Dataset.from_pandas(df)\r\nissues_dataset\r\nsample = issues_dataset.shuffle(seed=666).select(range(3))\r\nsample[0]\r\n```",
"I understand some work-around suggestions would be to not use load_dataset(), and instead using a different API function. Another alternative would be using the same function using streaming, as I had already suggested in my original post.\r\nHowever, the fact remains that there is an issue in this function as reported."
] | 2023-01-14T17:29:38
| 2023-09-14T11:39:57
| null |
NONE
| null | null | null | null |
### Describe the bug
Loading a previously downloaded & saved dataset as described in the HuggingFace course:
issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train")
Gives this error:
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
A work-around I found was to use streaming.
### Steps to reproduce the bug
Reproduce by executing the code provided:
https://huggingface.co/course/chapter5/5?fw=pt
From the heading:
'let’s create a function that can download all the issues from a GitHub repository'
### Expected behavior
No error
### Environment info
Datasets version 2.8.0. Note that version 2.6.1 gives the same error (related to null timestamp).
**[EDIT]**
This is the complete error trace confirming the issue is related to the timestamp (`Couldn't cast array of type timestamp[s] to null`)
```
Using custom data configuration default-950028611d2860c8
Downloading and preparing dataset json/default to [...]/.cache/huggingface/datasets/json/default-950028611d2860c8/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%|██████████| 1/1 [00:00<?, ?it/s]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 500.63it/s]
Generating train split: 2619 examples [00:00, 7155.72 examples/s]Traceback (most recent call last):
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1831, in _prepare_split_single
writer.write_table(table)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\arrow_writer.py", line 567, in write_table
pa_table = table_cast(pa_table, self._schema)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2282, in table_cast
return cast_table_to_schema(table, schema)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper
return func(array, *args, **kwargs)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2101, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper
return func(array, *args, **kwargs)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1990, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type timestamp[s] to null
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 1, in <module>
File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "[...]\PycharmProjects\TransformersTesting\dataset_issues.py", line 20, in <module>
issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train")
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\load.py", line 1757, in load_dataset
builder_instance.download_and_prepare(
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 860, in download_and_prepare
self._download_and_prepare(
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 953, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1706, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1849, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
Generating train split: 2619 examples [00:19, 7155.72 examples/s]
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5422/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5422/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/5421
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5421/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5421/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5421/events
|
https://github.com/huggingface/datasets/issues/5421
| 1,532,278,307
|
I_kwDODunzps5bVLYj
| 5,421
|
Support case-insensitive Hub dataset name in load_dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Closing as case-insensitivity should be only for URL redirection on the Hub. In the APIs, we will only support the canonical name (https://github.com/huggingface/moon-landing/pull/2399#issuecomment-1382085611)"
] | 2023-01-13T13:07:07
| 2023-01-13T20:12:32
| 2023-01-13T20:12:32
|
COLLABORATOR
| null | null | null | null |
### Feature request
The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue.
Ideally, we could load the glue dataset using the following:
```
from datasets import load_dataset
load_dataset('GLUE', 'cola')
```
It breaks because the loading script `GLUE.py` does not exist (`glue.py` should be selected instead).
Minor additional comment: in other cases without a loading script, we can load the dataset, but the automatically generated config name depends on the casing:
- `load_dataset('severo/danish-wit')` generates the config name `severo--danish-wit-e6fda5b070deb133`, while
- `load_dataset('severo/danish-WIT')` generates the config name `severo--danish-WIT-e6fda5b070deb133`
### Motivation
To follow the same UX on the Hub and in the datasets library.
### Your contribution
...
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5421/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5421/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7:05:25
|
https://api.github.com/repos/huggingface/datasets/issues/5419
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5419/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5419/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5419/events
|
https://github.com/huggingface/datasets/issues/5419
| 1,531,999,850
|
I_kwDODunzps5bUHZq
| 5,419
|
label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataColator
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/172385?v=4",
"events_url": "https://api.github.com/users/CreatixEA/events{/privacy}",
"followers_url": "https://api.github.com/users/CreatixEA/followers",
"following_url": "https://api.github.com/users/CreatixEA/following{/other_user}",
"gists_url": "https://api.github.com/users/CreatixEA/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CreatixEA",
"id": 172385,
"login": "CreatixEA",
"node_id": "MDQ6VXNlcjE3MjM4NQ==",
"organizations_url": "https://api.github.com/users/CreatixEA/orgs",
"received_events_url": "https://api.github.com/users/CreatixEA/received_events",
"repos_url": "https://api.github.com/users/CreatixEA/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CreatixEA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CreatixEA/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CreatixEA",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! Thanks for pointing out this inconsistency. Changing the default value at this point is probably not worth it, considering we've started discussing the state of the task API internally - we will most likely deprecate the current one and replace it with a more robust solution that relies on the `train_eval_index` field stored in the YAML section of the dataset cards.",
"The task templates API has been deprecated (will be removed in version 3.0), so I'm closing this issue."
] | 2023-01-13T09:40:07
| 2023-07-21T14:27:08
| 2023-07-21T14:27:08
|
NONE
| null | null | null | null |
### Describe the bug
When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using the `transformers.DataCollator` the default column name is `label` if binary or `label_ids` if multi-class problem.
It is required to rename the column accordingly to the expected name : `label` or `label_ids`
### Steps to reproduce the bug
```python
from datasets import TextClassification, AutoTokenized, DataCollatorWithPadding
ds_prepared = my_dataset.prepare_for_task(TextClassification(text_column='TEXT', label_column='MY_LABEL_COLUMN_1_OR_0'))
print(ds_prepared)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
ds_tokenized = ds_prepared.map(lambda x: tokenizer(x['text'], truncation=True), batched=True)
print(ds_tokenized)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_data = model.prepare_tf_dataset(ds_tokenized, shuffle=True, batch_size=16, collate_fn=data_collator)
print(tf_data)
```
### Expected behavior
Without renaming the the column, the target column is not in the final tf_data since it is not in the column name expected by the data_collator.
To correct this, we have to rename the column:
```python
ds_prepared = my_dataset.prepare_for_task(TextClassification(text_column='TEXT', label_column='MY_LABEL_COLUMN_1_OR_0')).rename_column('labels', 'label')
```
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5419/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5419/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 189 days, 4:47:01
|
https://api.github.com/repos/huggingface/datasets/issues/5418
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5418/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5418/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5418/events
|
https://github.com/huggingface/datasets/issues/5418
| 1,530,111,184
|
I_kwDODunzps5bM6TQ
| 5,418
|
Add ProgressBar for `to_parquet`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zanussbaum",
"id": 33707069,
"login": "zanussbaum",
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zanussbaum",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zanussbaum",
"id": 33707069,
"login": "zanussbaum",
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zanussbaum",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zanussbaum",
"id": 33707069,
"login": "zanussbaum",
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zanussbaum",
"user_view_type": "public"
}
] |
[
"Thanks for your proposal, @zanussbaum. Yes, I agree that would definitely be a nice feature to have!",
"@albertvillanova I’m happy to make a quick PR for the feature! let me know ",
"That would be awesome ! You can comment `#self-assign` to assign you to this issue and open a PR :) Will be happy to review",
"Closing as this has been merged @lhoestq "
] | 2023-01-12T05:06:20
| 2023-01-24T18:18:24
| 2023-01-24T18:18:24
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works.
### Motivation
It's a bit frustrating to not know how long a dataset will take to write to file and if it's stuck or not without a progress bar
### Your contribution
Sure I can help if needed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zanussbaum",
"id": 33707069,
"login": "zanussbaum",
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zanussbaum",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5418/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5418/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 12 days, 13:12:04
|
https://api.github.com/repos/huggingface/datasets/issues/5415
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5415/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5415/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5415/events
|
https://github.com/huggingface/datasets/issues/5415
| 1,526,904,861
|
I_kwDODunzps5bArgd
| 5,415
|
RuntimeError: Sharding is ambiguous for this dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2023-01-10T07:36:11
| 2023-01-18T14:09:04
| 2023-01-18T14:09:03
|
MEMBER
| null | null | null | null |
### Describe the bug
When loading some datasets, a RuntimeError is raised.
For example, for "ami" dataset: https://huggingface.co/datasets/ami/discussions/3
```
.../huggingface/datasets/src/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
1415 fpath = path_join(self._output_dir, fname)
1416
-> 1417 num_input_shards = _number_of_shards_in_gen_kwargs(split_generator.gen_kwargs)
1418 if num_input_shards <= 1 and num_proc is not None:
1419 logger.warning(
.../huggingface/datasets/src/datasets/utils/sharding.py in _number_of_shards_in_gen_kwargs(gen_kwargs)
10 lists_lengths = {key: len(value) for key, value in gen_kwargs.items() if isinstance(value, list)}
11 if len(set(lists_lengths.values())) > 1:
---> 12 raise RuntimeError(
13 (
14 "Sharding is ambiguous for this dataset: "
RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize:
- key samples_paths has length 6
- key ids has length 7
- key verification_ids has length 6
To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.
```
This behavior was introduced when implementing multiprocessing by PR:
- #5107
### Steps to reproduce the bug
```python
ds = load_dataset("ami", "microphone-single", split="train", revision="2d7620bb7c3f1aab9f329615c3bdb598069d907a")
```
### Expected behavior
No error raised.
### Environment info
Since datasets 2.7.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5415/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5415/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 8 days, 6:32:52
|
https://api.github.com/repos/huggingface/datasets/issues/5414
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5414/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5414/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5414/events
|
https://github.com/huggingface/datasets/issues/5414
| 1,525,733,818
|
I_kwDODunzps5a8Nm6
| 5,414
|
Sharding error with Multilingual LibriSpeech
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19574344?v=4",
"events_url": "https://api.github.com/users/Nithin-Holla/events{/privacy}",
"followers_url": "https://api.github.com/users/Nithin-Holla/followers",
"following_url": "https://api.github.com/users/Nithin-Holla/following{/other_user}",
"gists_url": "https://api.github.com/users/Nithin-Holla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nithin-Holla",
"id": 19574344,
"login": "Nithin-Holla",
"node_id": "MDQ6VXNlcjE5NTc0MzQ0",
"organizations_url": "https://api.github.com/users/Nithin-Holla/orgs",
"received_events_url": "https://api.github.com/users/Nithin-Holla/received_events",
"repos_url": "https://api.github.com/users/Nithin-Holla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nithin-Holla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nithin-Holla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nithin-Holla",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Thanks for reporting, @Nithin-Holla.\r\n\r\nThis is a known issue for multiple datasets and we are investigating it:\r\n- See e.g.: https://huggingface.co/datasets/ami/discussions/3",
"Main issue:\r\n- #5415",
"@albertvillanova Thanks! As a workaround for now, can I use the dataset in streaming mode?",
"Yes, @Nithin-Holla, in the meantime you can use this dataset in streaming mode."
] | 2023-01-09T14:45:31
| 2023-01-18T14:09:04
| 2023-01-18T14:09:04
|
NONE
| null | null | null | null |
### Describe the bug
Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace:
```
Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/2.1.0/1904af50f57a5c370c9364cc337699cfe496d4e9edcae6648a96be23086362d0...
Downloading data files: 100%
3/3 [00:00<00:00, 107.23it/s]
Downloading data files: 100%
1/1 [00:00<00:00, 35.08it/s]
Downloading data files: 100%
6/6 [00:00<00:00, 303.36it/s]
Downloading data files: 100%
3/3 [00:00<00:00, 130.37it/s]
Downloading data files: 100%
1049/1049 [00:00<00:00, 4491.40it/s]
Downloading data files: 100%
37/37 [00:00<00:00, 1096.78it/s]
Downloading data files: 100%
40/40 [00:00<00:00, 1003.93it/s]
Extracting data files: 100%
3/3 [00:11<00:00, 2.62s/it]
Generating train split:
469942/0 [34:13<00:00, 273.21 examples/s]
Output exceeds the size limit. Open the full output data in a text editor
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-74fa6d092bdc> in <module>
----> 1 mls = load_dataset(MLS_DATASET,
2 LANGUAGE,
3 cache_dir="~/datadrive/cache/huggingface/datasets",
4 ignore_verifications=True)
/anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1755
1756 # Download and prepare data
-> 1757 builder_instance.download_and_prepare(
1758 download_config=download_config,
1759 download_mode=download_mode,
/anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
858 if num_proc is not None:
859 prepare_split_kwargs["num_proc"] = num_proc
--> 860 self._download_and_prepare(
861 dl_manager=dl_manager,
862 verify_infos=verify_infos,
/anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
1609
1610 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs):
...
RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize:
- key audio_archives has length 1049
- key local_extracted_archive has length 1049
- key limited_ids_paths has length 1
To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.
```
### Steps to reproduce the bug
Here is the code to reproduce it:
```python
from datasets import load_dataset
MLS_DATASET = "facebook/multilingual_librispeech"
LANGUAGE = "german"
mls = load_dataset(MLS_DATASET,
LANGUAGE,
cache_dir="~/datadrive/cache/huggingface/datasets",
ignore_verifications=True)
```
### Expected behavior
The expected behaviour is that the dataset is successfully loaded.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-1094-azure-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 10.0.1
- Pandas version: 1.2.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5414/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5414/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 8 days, 23:23:33
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.