| url stringlengths 58-61 | repository_url stringclasses 1 value | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 48-51 | id int64 600M-3.67B | node_id stringlengths 18-24 | number int64 2-7.88k | title stringlengths 1-290 | user dict | labels listlengths 0-4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0-4 | comments listlengths 0-30 | created_at timestamp[s] date 2020-04-14 18:18:51 to 2025-11-26 16:16:56 | updated_at timestamp[s] date 2020-04-29 09:23:05 to 2025-11-30 03:52:07 | closed_at timestamp[s] date 2020-04-29 09:23:05 to 2025-11-21 12:31:19 ⌀ | author_association stringclasses 4 values | type null | active_lock_reason null | draft null | pull_request null | body stringlengths 0-228k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app null | state_reason stringclasses 4 values | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool 1 class | closed_at_time_taken duration[s] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7883
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7883/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7883/events
|
https://github.com/huggingface/datasets/issues/7883
| 3,668,182,561
|
I_kwDODunzps7apAYh
| 7,883
|
Data.to_csv() cannot be recognized by pylance
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/154290630?v=4",
"events_url": "https://api.github.com/users/xi4ngxin/events{/privacy}",
"followers_url": "https://api.github.com/users/xi4ngxin/followers",
"following_url": "https://api.github.com/users/xi4ngxin/following{/other_user}",
"gists_url": "https://api.github.com/users/xi4ngxin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xi4ngxin",
"id": 154290630,
"login": "xi4ngxin",
"node_id": "U_kgDOCTJJxg",
"organizations_url": "https://api.github.com/users/xi4ngxin/orgs",
"received_events_url": "https://api.github.com/users/xi4ngxin/received_events",
"repos_url": "https://api.github.com/users/xi4ngxin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xi4ngxin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xi4ngxin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xi4ngxin",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-11-26T16:16:56
| 2025-11-26T16:16:56
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi everyone! I am a beginner with datasets.
I am testing reading multiple CSV files from a zip archive. The result of reading the dataset shows success, and it can ultimately be correctly saved to CSV.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/s]
DatasetDict({
train: Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', ' 对方钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
})
```
However, Pylance gives me the following error:
```
Cannot access attribute "to_csv" for class "DatasetDict"
Attribute "to_csv" is unknown (Pylance [reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md))
Cannot access attribute "to_csv" for class "IterableDatasetDict"
Attribute "to_csv" is unknown (Pylance [reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md))
(method) to_csv: Unknown | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, num_proc: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int) | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int)
```
I ignored the error and continued executing to get the correct result:
```
Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', '对方 钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
```
Since the data volume is small, I manually merged the CSV files, and the final result is consistent with what the program saved.
It looks like this:
<img width="1264" height="150" alt="Image" src="https://github.com/user-attachments/assets/743540d7-ad8c-4531-ae7e-de71a5243a32" />
### Steps to reproduce the bug
This is my code:
```python
from datasets import load_dataset

def main():
    url = "data/test.zip"
    data_files = {"train": url}
    dataset = load_dataset("csv", data_files=data_files, split="train", encoding="gbk", skiprows=2)
    # print(dataset)
    dataset.to_csv("data/test.csv")

if __name__ == "__main__":
    main()
```
### Expected behavior
I want to know why this happens. Is there something wrong with my code?
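For reference, a minimal sketch of one way to satisfy the type checker (same script as above, with only an isinstance check added): `load_dataset` is annotated as returning a union of `Dataset`, `DatasetDict`, `IterableDataset`, and `IterableDatasetDict`, and the dict variants do not expose `to_csv`, so Pylance flags the call even though it works at runtime.
```python
from datasets import Dataset, load_dataset

dataset = load_dataset(
    "csv", data_files={"train": "data/test.zip"}, split="train", encoding="gbk", skiprows=2
)
# Narrow the union type returned by load_dataset so the type checker knows
# this is a Dataset, which does define to_csv.
assert isinstance(dataset, Dataset)
dataset.to_csv("data/test.csv")
```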
### Environment info
OS: Windows 11 (upgraded from Windows_NT x64 10.0.22631)
Editor:
VS Code Version: 1.106.2 (user setup)
"datasets" version = "4.4.1"
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7883/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7882
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7882/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7882/events
|
https://github.com/huggingface/datasets/issues/7882
| 3,667,664,527
|
I_kwDODunzps7anB6P
| 7,882
|
Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6270922?v=4",
"events_url": "https://api.github.com/users/Oligou/events{/privacy}",
"followers_url": "https://api.github.com/users/Oligou/followers",
"following_url": "https://api.github.com/users/Oligou/following{/other_user}",
"gists_url": "https://api.github.com/users/Oligou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Oligou",
"id": 6270922,
"login": "Oligou",
"node_id": "MDQ6VXNlcjYyNzA5MjI=",
"organizations_url": "https://api.github.com/users/Oligou/orgs",
"received_events_url": "https://api.github.com/users/Oligou/received_events",
"repos_url": "https://api.github.com/users/Oligou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Oligou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oligou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Oligou",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-11-26T14:06:02
| 2025-11-26T14:06:02
| null |
NONE
| null | null | null | null |
### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-42/000_00027.parquet
Discussion: https://huggingface.co/datasets/epfml/FineWeb-HQ/discussions/2
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"epfml/FineWeb-HQ",
data_files="data/CC-MAIN-2024-26/000_00003.parquet",
)
```
Error message:
```
HfHubHTTPError: 403 Forbidden: None.
Cannot access content at: https://cdn-lfs-us-1.hf.co/repos/...
Make sure your token has the correct permissions.
...
<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
```
### Expected behavior
The dataset should load successfully for all files.
### Environment info
- python 3.10
- datasets 4.4.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7882/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7880/events
|
https://github.com/huggingface/datasets/issues/7880
| 3,667,561,864
|
I_kwDODunzps7amo2I
| 7,880
|
Spurious label column created when audiofolder/imagefolder directories match split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neha222222",
"id": 132138786,
"login": "neha222222",
"node_id": "U_kgDOB-BHIg",
"organizations_url": "https://api.github.com/users/neha222222/orgs",
"received_events_url": "https://api.github.com/users/neha222222/received_events",
"repos_url": "https://api.github.com/users/neha222222/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neha222222/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neha222222",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-11-26T13:36:24
| 2025-11-26T13:36:24
| null |
NONE
| null | null | null | null |
## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```python
from datasets import load_dataset
ds = load_dataset("datasets-examples/doc-audio-4")
print(ds["train"].features)
```
This shows a `label` column with `ClassLabel(names=['test', 'train'])`, which is incorrect.
## Root cause
In `folder_based_builder.py`, the `labels` set is accumulated across ALL splits (line 77). When directories are `train/` and `test/`:
- `labels = {"train", "test"}` → `len(labels) > 1` → `add_labels = True`
- Spurious label column is created with split names as class labels
## Expected behavior
No `label` column should be added when directory names match split names.
## Proposed fix
Skip label inference when inferred labels match split names.
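For illustration only, a hypothetical sketch of that guard (variable and split names assumed; this is not the actual `folder_based_builder.py` code):
```python
# Hypothetical guard, sketched outside the real builder code.
KNOWN_SPLIT_NAMES = {"train", "test", "validation", "val", "dev"}

def should_add_labels(labels: set[str]) -> bool:
    # Only add a label column when there are multiple labels and they are not
    # simply the split directory names.
    return len(labels) > 1 and not labels.issubset(KNOWN_SPLIT_NAMES)

print(should_add_labels({"train", "test"}))  # False: these are split names
print(should_add_labels({"cat", "dog"}))     # True: genuine class labels
```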
cc @lhoestq
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7880/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7879
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7879/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7879/events
|
https://github.com/huggingface/datasets/issues/7879
| 3,657,249,446
|
I_kwDODunzps7Z_TKm
| 7,879
|
python core dump when downloading dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5960219?v=4",
"events_url": "https://api.github.com/users/hansewetz/events{/privacy}",
"followers_url": "https://api.github.com/users/hansewetz/followers",
"following_url": "https://api.github.com/users/hansewetz/following{/other_user}",
"gists_url": "https://api.github.com/users/hansewetz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hansewetz",
"id": 5960219,
"login": "hansewetz",
"node_id": "MDQ6VXNlcjU5NjAyMTk=",
"organizations_url": "https://api.github.com/users/hansewetz/orgs",
"received_events_url": "https://api.github.com/users/hansewetz/received_events",
"repos_url": "https://api.github.com/users/hansewetz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hansewetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hansewetz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hansewetz",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi @hansewetz I'm curious, for me it works just fine. Are you still observing the issue?",
"Yup ... still the same issue.\nHowever, after adding a ```sleep(1)``` call after the ``` for``` loop by accident during debugging, the program terminates properly (not a good solution though ... :-) ).\nAre there some threads created that handles the download that are still running when the program exits?\nHaven't had time yet to go through the code in ```iterable_dataset.py::IterableDataset```\n",
"Interesting, I was able to reproduce it, on a jupyter notebook the code runs just fine, as a Python script indeed it seems to never finish running (which is probably leading to the core dumped error). I'll try and take a look at the source code as well to see if I can figure it out.",
"Hi @hansewetz ,\nIf possible can I be assigned with this issue?\n\n",
"```If possible can I be assigned with this issue?```\nHi, I don't know how assignments work here and who can take decisions about assignments ... ",
"Hi @hansewetz and @Aymuos22, I have made some progress:\n\n1) Confirmed last working version is 3.1.0\n\n2) From 3.1.0 to 3.2.0, there was a change in how parquet files are read (see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py/#168).\n\nThe issue seems to be the following code:\n\n```\nparquet_fragment.to_batches(\n batch_size=batch_size,\n columns=self.config.columns,\n filter=filter_expr,\n batch_readahead=0,\n fragment_readahead=0,\n )\n```\n\nAdding a `use_threads=False` parameter to the `to_batches` call solves the bug. However, this seems far from an optimal solution, since we'd like to be able to use multiple threads for reading the fragments. \n\nI'll keep investigating to see if there's a better solution.",
"Hi @lhoestq, may I ask if the current behaviour was expected by you folks and you don't think it needs solving, or should I keep on investigating a compromise between using multithreading / avoid unexpected behaviour? Thanks in advance :) ",
"Having the same issue. the code never stops executing. Using datasets 4.4.1\nTried with \"islice\" as well. When the streaming flag is True, the code doesn't end execution. On vs-code.",
"The issue on pyarrow side is here: https://github.com/apache/arrow/issues/45214 and the original issue in `datasets` here: https://github.com/huggingface/datasets/issues/7357\n\nIt would be cool to have a fix on the pyarrow side",
"Thank you very much @lhoestq, I'm reading the issue thread in pyarrow and realizing you've been raising awareness around this for a long time now. When I have some time I'll look at @pitrou's PR to see if I can get a better understanding of what's going on on pyarrow. "
] | 2025-11-24T06:22:53
| 2025-11-25T20:45:55
| null |
NONE
| null | null | null | null |
### Describe the bug
When downloading a dataset in streaming mode and exiting the program before the download completes, the Python program core dumps on exit:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with python 3.12.3, python 3.9.21
### Steps to reproduce the bug
Create python venv:
```bash
python -m venv venv
source ./venv/bin/activate
pip install datasets==4.4.1
```
Execute the following program:
```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb-2", 'hrv_Latn', split="test", streaming=True)
for sample in ds:
    break
```
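For anyone who just needs the script to terminate cleanly, here is a sketch of the band-aid mentioned in the comments (a short sleep before exit; it is not a fix):
```python
import time

from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb-2", "hrv_Latn", split="test", streaming=True)
for sample in ds:
    break
# Band-aid from the discussion: give the streaming prefetch threads a moment
# to wind down so the interpreter does not abort on exit.
time.sleep(1)
```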
### Expected behavior
Clean program exit
### Environment info
described above
**note**: the example works correctly when using ```datasets==3.1.0```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7879/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7877
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7877/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7877/events
|
https://github.com/huggingface/datasets/issues/7877
| 3,652,906,788
|
I_kwDODunzps7Zuu8k
| 7,877
|
work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi! Just created a Pull Request (#7890) to try to fix this using your suggestions. I hope it helps!"
] | 2025-11-21T19:51:48
| 2025-11-29T20:37:42
| null |
CONTRIBUTOR
| null | null | null | null |
This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would normally set via `export TMPDIR=/some/big/storage`.
However, the `tempfile` facility that `datasets` and `pyarrow` use is somewhat broken: if the path doesn't exist, it silently ignores it and falls back to using `/tmp`. Watch this:
```
$ export TMPDIR='/tmp/username'
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp
```
Now let's ensure the path exists:
```
$ export TMPDIR='/tmp/username'
$ mkdir -p $TMPDIR
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp/username
```
So I recommend `datasets` do one of the following two things:
1. assert that the `$TMPDIR` directory exists, telling the user to create it if it doesn't
2. auto-create it
The reason for (1) is that I don't know why `tempfile` doesn't auto-create the dir - perhaps there is some security implication? I will let you guys make the decision, but the key is not to let things silently fall through, leaving the user puzzling over why, no matter what they do, they can't get past `No space left on device` while using `datasets`.
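To make option (2) concrete, here is a minimal sketch; the placement inside `datasets` is assumed, and this is not existing library code:
```python
import os
import tempfile

# Option (2) sketched: if TMPDIR is set but the directory is missing, create it
# so that tempfile.gettempdir() does not silently fall back to /tmp.
tmpdir = os.environ.get("TMPDIR")
if tmpdir and not os.path.isdir(tmpdir):
    os.makedirs(tmpdir, exist_ok=True)
    tempfile.tempdir = None  # drop the cached default so TMPDIR is re-read
print(tempfile.gettempdir())
```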
Thank you.
I found this via https://stackoverflow.com/questions/37229398/python-tempfile-gettempdir-does-not-respect-tmpdir while trying to help a colleague to solve this exact issue.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7877/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7872
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7872/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7872/events
|
https://github.com/huggingface/datasets/issues/7872
| 3,643,681,893
|
I_kwDODunzps7ZLixl
| 7,872
|
IterableDataset does not use features information in to_pandas
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/790640?v=4",
"events_url": "https://api.github.com/users/bonext/events{/privacy}",
"followers_url": "https://api.github.com/users/bonext/followers",
"following_url": "https://api.github.com/users/bonext/following{/other_user}",
"gists_url": "https://api.github.com/users/bonext/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bonext",
"id": 790640,
"login": "bonext",
"node_id": "MDQ6VXNlcjc5MDY0MA==",
"organizations_url": "https://api.github.com/users/bonext/orgs",
"received_events_url": "https://api.github.com/users/bonext/received_events",
"repos_url": "https://api.github.com/users/bonext/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bonext/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bonext/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bonext",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Created A PR!",
"Another test script that can be used to test the behavior - \n\n```\nimport datasets\nfrom datasets import features\n\ndef test_crash():\n common_features = features.Features({\n \"a\": features.Value(\"int64\"),\n \"b\": features.List({\"c\": features.Value(\"int64\")}),\n })\n\n def row_generator():\n yield {\"a\": 1, \"b\": []}\n yield {\"a\": 1, \"b\": [{\"c\": 1}]}\n\n d = datasets.IterableDataset.from_generator(row_generator, features=common_features)\n\n list(d.to_pandas()) # <-- this triggers the crash\n\n```"
] | 2025-11-19T17:12:59
| 2025-11-19T18:52:14
| null |
NONE
| null | null | null | null |
### Describe the bug
`IterableDataset` created from a generator with an explicit `features=` parameter seems to ignore the provided features for certain operations, e.g. `.to_pandas(...)`, when the data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets import features


def test_to_pandas_works_with_explicit_schema():
    common_features = features.Features(
        {
            "a": features.Value("int64"),
            "b": features.List({"c": features.Value("int64")}),
        }
    )

    def row_generator():
        data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
        for row in data:
            yield row

    d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
    for _ in d.to_pandas():
        pass
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:3703: in to_pandas
# table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2563: in iter
# for key, pa_table in iterator:
# ^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2078: in _iter_arrow
# for key, pa_table in self.ex_iterable._iter_arrow():
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:599: in _iter_arrow
# yield new_key, pa.Table.from_batches(chunks_buffer)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# pyarrow/table.pxi:5039: in pyarrow.lib.Table.from_batches
# ???
# pyarrow/error.pxi:155: in pyarrow.lib.pyarrow_internal_check_status
# ???
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# > ???
# E pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
# E a: int64
# E b: list<item: null>
# E vs
# E a: int64
# E b: list<item: struct<c: int64>>
# pyarrow/error.pxi:92: ArrowInvalid
```
### Expected behavior
Arrow operations should use the schema provided through `features=`, not the one inferred from the data.
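For comparison, a minimal sketch (not part of the original report): the non-streaming `Dataset.from_generator` encodes rows with the declared features up front, so the same data converts to pandas without a schema mismatch.
```python
import datasets
from datasets import features

common_features = features.Features(
    {"a": features.Value("int64"), "b": features.List({"c": features.Value("int64")})}
)

def row_generator():
    yield {"a": 1, "b": []}
    yield {"a": 1, "b": [{"c": 1}]}

# The non-streaming builder casts each row to the declared features while
# writing, so both rows end up with the same Arrow schema.
d = datasets.Dataset.from_generator(row_generator, features=common_features)
print(d.to_pandas())
```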
### Environment info
- datasets version: 4.4.1
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.1
- huggingface_hub version: 1.1.4
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- fsspec version: 2025.10.0
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7872/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7871
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7871/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7871/events
|
https://github.com/huggingface/datasets/issues/7871
| 3,643,607,371
|
I_kwDODunzps7ZLQlL
| 7,871
|
Reqwest Error: HTTP status client error (429 Too Many Requests)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanan1116",
"id": 26405281,
"login": "yanan1116",
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"organizations_url": "https://api.github.com/users/yanan1116/orgs",
"received_events_url": "https://api.github.com/users/yanan1116/received_events",
"repos_url": "https://api.github.com/users/yanan1116/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanan1116",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"the dataset repo: `https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim`",
"Hi @yanan1116,\n\nThanks for the detailed report! However, this issue was filed in the wrong repository. This is a `huggingface_hub` issue, not a `datasets` issue.\n\nLooking at your traceback, you're using the `hf download` CLI command (from `huggingface_hub`), and the error occurs in `huggingface_hub/file_download.py` at line 571 in the `xet_get` function. The `datasets` library is not involved in this download at all.\n\nThe 429 error means the CAS (Content Addressable Storage) service at `https://cas-server.xethub.hf.co` is rate-limiting your requests. The `huggingface_hub` library currently doesn't have automatic retry logic for 429 errors from the CAS service.\n\nPlease reopen this issue at: https://github.com/huggingface/huggingface_hub/issues"
] | 2025-11-19T16:52:24
| 2025-11-30T03:32:00
| null |
NONE
| null | null | null | null |
### Describe the bug
Full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 327, in __call__
raise e
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 310, in __call__
return get_command(self)(*args, **kwargs)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 803, in main
return _main(
self,
...<6 lines>...
**extra,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 192, in _main
rv = self.invoke(ctx)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 691, in wrapper
return callback(**use_params)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 188, in download
_print_result(run_download())
~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 149, in run_download
return snapshot_download(
repo_id=repo_id,
...<10 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 451, in snapshot_download
thread_map(
~~~~~~~~~~^
_inner_hf_hub_download,
^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 431, in _inner_hf_hub_download
hf_hub_download( # type: ignore
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
repo_id,
^^^^^^^^
...<14 lines>...
dry_run=dry_run,
^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 986, in hf_hub_download
return _hf_hub_download_to_local_dir(
# Destination
...<16 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1390, in _hf_hub_download_to_local_dir
_download_to_tmp_and_move(
~~~~~~~~~~~~~~~~~~~~~~~~~^
incomplete_path=paths.incomplete_path(etag),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1791, in _download_to_tmp_and_move
xet_get(
~~~~~~~^
incomplete_path=incomplete_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 571, in xet_get
download_files(
~~~~~~~~~~~~~~^
xet_download_info,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
progress_updater=[progress_updater],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
RuntimeError: Data processing error: CAS service error : Reqwest Error: HTTP status client error (429 Too Many Requests), domain: https://cas-server.xethub.hf.co/reconstructions/04b8a4667b84b3b874a6a2f070cec88920f6289e71185d69fa87e3cf29834710
```
### Steps to reproduce the bug
My command:
```bash
hf download nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim --repo-type dataset --include "single_panda_gripper.CoffeePressButton/**" --local-dir /home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/
```
### Expected behavior
I expect the data to be downloaded without any issue.
### Environment info
huggingface_hub 1.1.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7871/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7870
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7870/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7870/events
|
https://github.com/huggingface/datasets/issues/7870
| 3,642,209,953
|
I_kwDODunzps7ZF7ah
| 7,870
|
Visualization for Medical Imaging Datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"It would be amazing to be able to show the Papaya UI in google colab / jupyter notebook. IIRC both allow serving javascript via nbextensions that we can surely use in HTML() objects.\n\nAlternatively we could also start with a simple approach and dump the medical image data as a video file that goes through the slices, so we don't need javascript."
] | 2025-11-19T11:05:39
| 2025-11-21T12:31:19
| 2025-11-21T12:31:19
|
CONTRIBUTOR
| null | null | null | null |
This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I looked into the options for visualizing NIfTI (and potentially DICOM) data, and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr: https://www.jsdelivr.com/package/npm/brainbrowser (but that is from 2015!)
- https://github.com/rii-mango/Papaya, custom but BSD-style license that would require `datasets` to list the conditions in its readme somewhere, last commit June 2024. I looked into this library and it looks mature and good enough for our use case. Working on it for only a short time, I wasn't able to get it running, but I am sure we could; it would probably require some JS on datasets' end. It is available on jsdelivr as well: https://www.jsdelivr.com/package/npm/papaya-viewer. It seems to be frequently loaded.
- https://github.com/hanayik/niivue, BSD3 license, last commit May 26, 2021. Archived. Doesn't look like an option.
I think the only real option for us is Papaya, but there is also the risk that we'll end up with an unmaintained package after a while, since development seems to be slow or even halted.
I think conceptually we need to figure out how to build a good solution for visualizing medical image data. In shap, we have a separate javascript folder in which we render visualizations; this could serve as a blueprint but would require a bundler, etc. Alternatively, one could go with a naive approach and just write some HTML code in a Python string, loading the package via jsdelivr.
@lhoestq thoughts?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7870/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 1:25:40
|
https://api.github.com/repos/huggingface/datasets/issues/7869
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7869/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7869/events
|
https://github.com/huggingface/datasets/issues/7869
| 3,636,808,734
|
I_kwDODunzps7YxUwe
| 7,869
|
Why does dataset merge fail when tools have different parameters?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/116297296?v=4",
"events_url": "https://api.github.com/users/hitszxs/events{/privacy}",
"followers_url": "https://api.github.com/users/hitszxs/followers",
"following_url": "https://api.github.com/users/hitszxs/following{/other_user}",
"gists_url": "https://api.github.com/users/hitszxs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hitszxs",
"id": 116297296,
"login": "hitszxs",
"node_id": "U_kgDOBu6OUA",
"organizations_url": "https://api.github.com/users/hitszxs/orgs",
"received_events_url": "https://api.github.com/users/hitszxs/received_events",
"repos_url": "https://api.github.com/users/hitszxs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hitszxs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hitszxs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hitszxs",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi @hitszxs,\n This is indeed by design,\n\nThe `datasets` library is built on top of [Apache Arrow](https://arrow.apache.org/), which uses a **columnar storage format** with strict schema requirements. When you try to concatenate/merge datasets, the library checks if features can be aligned using the [`_check_if_features_can_be_aligned`](https://github.com/huggingface/datasets/blob/main/src/datasets/features/features.py#L2297-L2316) function.\n\nTwo datasets can be merged if:\n1. Columns with the same name have the **same type**, OR\n2. One of them has `Value(\"null\")` (representing missing data)\n\nFor struct types (nested dictionaries like your tool schemas), **all fields must match exactly**. This ensures type safety and efficient columnar storage.\n\n## Workarounds for Your Use Case\n Store tools as JSON strings\n\nInstead of using nested struct types, store the tool definitions as JSON strings\n\n\n"
] | 2025-11-18T08:33:04
| 2025-11-30T03:52:07
| null |
NONE
| null | null | null | null |
Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions, I get the following error:
```
TypeError: Couldn't cast array of type
struct<refundFee: struct<description: string, type: string>, ... , servicerId: struct<description: string, type: string>>
to
{
    'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')},
    ...
    'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')}
}
```
From my understanding, the merge fails because the tools column's nested structure is different across datasets — e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge.
My question is: why is it designed this way?
Is this strict schema matching a hard requirement of the library?
Is there a recommended way to merge datasets with different tool schemas (different parameters and types)?
For an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility?
Any guidance or design rationale would be greatly appreciated. Thanks!
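For illustration, a minimal sketch of the JSON-string workaround suggested in the comment above (toy rows and hypothetical tool names):
```python
import json

from datasets import Dataset, concatenate_datasets

# Toy rows with different tool parameter schemas; serializing the tool
# definitions to JSON strings gives both "tools" columns the same Arrow type.
ds1 = Dataset.from_list(
    [{"text": "refund example", "tools": json.dumps([{"name": "tool1", "parameters": {"refundFee": {"type": "string"}}}])}]
)
ds2 = Dataset.from_list(
    [{"text": "service example", "tools": json.dumps([{"name": "tool2", "parameters": {"servicerId": {"type": "string"}}}])}]
)

merged = concatenate_datasets([ds1, ds2])  # schemas now align, so this succeeds
tools = json.loads(merged[0]["tools"])     # parse back when building prompts
```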
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7869/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7868
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7868/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7868/events
|
https://github.com/huggingface/datasets/issues/7868
| 3,632,429,308
|
I_kwDODunzps7Ygnj8
| 7,868
|
Data duplication with `split_dataset_by_node` and `interleaved_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4",
"events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}",
"followers_url": "https://api.github.com/users/ValMystletainn/followers",
"following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}",
"gists_url": "https://api.github.com/users/ValMystletainn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ValMystletainn",
"id": 42485228,
"login": "ValMystletainn",
"node_id": "MDQ6VXNlcjQyNDg1MjI4",
"organizations_url": "https://api.github.com/users/ValMystletainn/orgs",
"received_events_url": "https://api.github.com/users/ValMystletainn/received_events",
"repos_url": "https://api.github.com/users/ValMystletainn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ValMystletainn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ValMystletainn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ValMystletainn",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi @ValMystletainn ,\nCan I be assigned this issue?",
"> split_dataset_by_node\n\nHello, I have some questions about your intended use: (1) It seems unnecessary to use interleaving for a single dataset. (2) For multiple datasets, it seems possible to interleave first and then split by node?"
] | 2025-11-17T09:15:24
| 2025-11-29T03:21:34
| null |
NONE
| null | null | null | null |
### Describe the bug
Data is duplicated across different ranks when an IterableDataset is processed with `split_dataset_by_node` first and then `interleave_datasets`.
### Steps to reproduce the bug
I have provided a minimal script:
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node
path = "/mnt/wwx/datasets/fineweb/data/CC-MAIN-2013-20/"
files = [os.path.join(path, fn) for fn in os.listdir(path)]
dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)
print(f"{dataset.n_shards=}")
dataset_rank0 = split_dataset_by_node(dataset, 0, 4)
dataset_rank1 = split_dataset_by_node(dataset, 1, 4)
dataset_rank0_interleaved = interleave_datasets([dataset_rank0], seed=42, probabilities=[1.0])
dataset_rank1_interleaved = interleave_datasets([dataset_rank1], seed=42, probabilities=[1.0])
print("print the first sample id from all datasets")
print("dataset", next(iter(dataset))['id'])
print("dataset_rank0", next(iter(dataset_rank0))['id'])
print("dataset_rank1", next(iter(dataset_rank1))['id'])
print("dataset_rank0_interleaved", next(iter(dataset_rank0_interleaved))['id'])
print("dataset_rank1_interleaved", next(iter(dataset_rank1_interleaved))['id'])
dataset_rank0_shard = dataset.shard(4, 0)
dataset_rank1_shard = dataset.shard(4, 1)
dataset_rank0_shard_interleaved = interleave_datasets([dataset_rank0_shard], seed=42, probabilities=[1.0])
dataset_rank1_shard_interleaved = interleave_datasets([dataset_rank1_shard], seed=42, probabilities=[1.0])
print("dataset_rank0_shard", next(iter(dataset_rank0_shard))['id'])
print("dataset_rank1_shard", next(iter(dataset_rank1_shard))['id'])
print("dataset_rank0_shard_interleaved", next(iter(dataset_rank0_shard_interleaved))['id'])
print("dataset_rank1_shard_interleaved", next(iter(dataset_rank1_shard_interleaved))['id'])
```
I just used a subfolder of C4 with 14 parquet files for a quick run and got:
```
dataset.n_shards=14
print the first sample id from all datasets
dataset <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0 <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1 <urn:uuid:6b7da64f-c26e-4086-aef5-4b6f01106223>
dataset_rank0_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0_shard <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
dataset_rank0_shard_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard_interleaved <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
```
### Expected behavior
The first sample of `dataset_rank0_interleaved` and `dataset_rank1_interleaved` should be different, as with the other `rank0`/`rank1` pairs.
I have dug into the functions to find out how the `split -> interleave` process works.
For an iterable dataset, `split_dataset_by_node` doesn't change the `._ex_iterable` attribute of the dataset. It just sets the distributed config on the dataset, and that config is used in the actual `__iter__` call to handle shard splitting or sample skipping.
However, `interleave_datasets` for iterable datasets copies out the `._ex_iterable` of all provided datasets and builds a new `_ex_iterable` from them, so the distributed config is not carried over, which causes the data duplication across DP ranks.
So I would first ask: is this an unsupported order of use for these functions, meaning one should:
- always apply `split_dataset_by_node` last rather than somewhere in the middle, or
- use `dataset.shard(dp_size, dp_rank)` rather than `split_dataset_by_node` in cases like mine.
If this order of use is permitted, then I think it is a bug, and I can open a PR to fix it.
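For illustration, a minimal sketch of the first option (interleave first, then split), reusing the `dataset` object from the script above:
```python
# "Split last" ordering: interleave first, then split by node, so the
# distributed config set by split_dataset_by_node is not dropped.
interleaved = interleave_datasets([dataset], seed=42, probabilities=[1.0])
rank0 = split_dataset_by_node(interleaved, rank=0, world_size=4)
rank1 = split_dataset_by_node(interleaved, rank=1, world_size=4)
print(next(iter(rank0))["id"])
print(next(iter(rank1))["id"])  # expected to differ from rank0's first sample
```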
(I met this bug in real training; the related issue is https://github.com/ByteDance-Seed/VeOmni/issues/200, if it helps.)
### Environment info
datasets 4.4.1
ubuntu 20.04
python 3.11.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7868/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7867
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7867/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7867/events
|
https://github.com/huggingface/datasets/issues/7867
| 3,620,931,722
|
I_kwDODunzps7X0wiK
| 7,867
|
NonMatchingSplitsSizesError when loading partial dataset files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13678719?v=4",
"events_url": "https://api.github.com/users/QingGo/events{/privacy}",
"followers_url": "https://api.github.com/users/QingGo/followers",
"following_url": "https://api.github.com/users/QingGo/following{/other_user}",
"gists_url": "https://api.github.com/users/QingGo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QingGo",
"id": 13678719,
"login": "QingGo",
"node_id": "MDQ6VXNlcjEzNjc4NzE5",
"organizations_url": "https://api.github.com/users/QingGo/orgs",
"received_events_url": "https://api.github.com/users/QingGo/received_events",
"repos_url": "https://api.github.com/users/QingGo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QingGo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QingGo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QingGo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"While using verification_mode='no_checks' parameter in load_dataset() can bypass this validation, this solution is not intuitive or convenient for most users, especially those who are not familiar with all the parameters of the load_dataset() function.\n\n```python\nbook_corpus_ds = load_dataset(\n \"SaylorTwift/the_pile_books3_minus_gutenberg\",\n name=\"default\",\n data_files=\"data/train-00000-of-00213-312fd8d7a3c58a63.parquet\",\n split=\"train\",\n cache_dir=\"./data\",\n verification_mode='no_checks'\n)\n```",
"Thanks for the report and reproduction steps @QingGo \n@lhoestq which one of the following looks like a nicer way to handle this?\n\n1] Skip split-size validation entirely for partial loads\nIf the user passes data_files manually and it represents only a subset, then verify_splits() should simply not run, or skip validation only for that split.\n\n2] Replace the error with a warning\n\n3] Automatically detect partial-load cases(i mean we can try this out!)\n\nAssume this, \nIf data_files is provided AND\nthe number of provided files ≠ number of expected files in metadata,\nthen treat it as a partial load and disable strict verification.\n"
] | 2025-11-13T12:03:23
| 2025-11-16T15:39:23
| null |
NONE
| null | null | null | null |
### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a NonMatchingSplitsSizesError. This prevents users from loading partial datasets for quick validation under poor network conditions or with very large datasets.
### Steps to reproduce the bug
1. Use the Hugging Face `datasets` library to load a dataset with only specific files specified
2. Ensure the dataset repository has split metadata defined in README.md
3. Observe the error when attempting to load a subset of files
```python
# Example code that triggers the error
from datasets import load_dataset
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
name="default",
data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet",
split="train",
cache_dir="./data"
)
```
### Error Message
```
Traceback (most recent call last):
File "/Users/QingGo/code/llm_learn/src/data/clean_cc_bc.py", line 13, in <module>
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
...
File "/Users/QingGo/code/llm_learn/.venv/lib/python3.13/site-packages/datasets/utils/info_utils.py", line 77, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=106199627990.47722, num_examples=192661, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=454897326, num_examples=905, shard_lengths=None, dataset_name='the_pile_books3_minus_gutenberg')}]
```
### Expected behavior
When loading partial dataset files, the system should:
1. Skip the `NonMatchingSplitsSizesError` validation, OR
2. Only log a warning message instead of raising an error
### Environment info
- `datasets` version: 4.3.0
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.2
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7867/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7864
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7864/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7864/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7864/events
|
https://github.com/huggingface/datasets/issues/7864
| 3,619,137,823
|
I_kwDODunzps7Xt6kf
| 7,864
|
add_column and add_item erroneously(?) require new_fingerprint parameter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17151810?v=4",
"events_url": "https://api.github.com/users/echthesia/events{/privacy}",
"followers_url": "https://api.github.com/users/echthesia/followers",
"following_url": "https://api.github.com/users/echthesia/following{/other_user}",
"gists_url": "https://api.github.com/users/echthesia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/echthesia",
"id": 17151810,
"login": "echthesia",
"node_id": "MDQ6VXNlcjE3MTUxODEw",
"organizations_url": "https://api.github.com/users/echthesia/orgs",
"received_events_url": "https://api.github.com/users/echthesia/received_events",
"repos_url": "https://api.github.com/users/echthesia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/echthesia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echthesia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/echthesia",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Take this with a grain of salt, this is just my personal understanding:\nWhile you technically can overwrite the new_fingerprint with a string, e.g.\n```python\nt = d.add_column(\"new_column\", col_value, new_fingerprint=\"dummy_fp\")\nassert t._fingerprint == \"dummy_fp\" # this is true and will pass\n```\nthis is not desired since the fingerprint should be calculated based on the operations (and their arguments) to be unique. This is handled by the [fingerprint_transform](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6077) function which needs a \"new_fingerprint\" keyword argument and creates a unique hash if its value is not set, see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L432). So it is probably safer to not document this keyword, since one doesn't want the user to actually use it and it's only a feature in very limited cases for people really knowing what they are doing. The thing that might be bugging people who read the code is that `new_fingerprint` seems to be required for `add_item` and `add_column` but it is actually set by the decorator (in which's definition it is optional), so maybe changing the signature of `add_item` and `add_column` to `new_fingerprint: Optional[str] = None` would make sense, since this is also how it's handled in the other cases (created by claude):\n\n - [flatten](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2034)\n - [cast_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2165)\n - [remove_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2209)\n - [rename_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2263)\n - [rename_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2329)\n - [select_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2397)\n - [batch](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3760)\n - [filter](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3813)\n - [flatten_indices](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3959)\n - [select](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4038)\n - [_select_contiguous](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4128)\n - [sort](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4376)\n - [shuffle](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4506)\n - [train_test_split](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4641)\nSo as you mentioned, I believe the methods erronously require the `new_fingerprint` parameter and making them optional is a little consistency win."
] | 2025-11-13T02:56:49
| 2025-11-24T20:33:59
| null |
NONE
| null | null | null | null |
### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both `Dataset.add_column` and `Dataset.add_item` require a `new_fingerprint` string. This parameter is passed directly to the dataset constructor, which has the `fingerprint` parameter listed as optional; is there any reason it shouldn't be optional in these methods as well?
### Steps to reproduce the bug
Reproduction steps:
1. Look at the function signature for add_column: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6078
2. Repeat for add_item: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6336
### Expected behavior
`add_column` and `add_item` should either make the `new_fingerprint` parameter optional or document it in their docstrings
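For illustration only, a minimal sketch (parameter names approximate) of what the relaxed signature could look like, given that the `fingerprint_transform` decorator already generates a hash when `new_fingerprint` is `None`:
```python
from typing import Optional
# Hypothetical signature sketch, mirroring flatten(), select(), shuffle(), etc.;
# the decorator fills in new_fingerprint when it is left as None.
def add_column(self, name: str, column, new_fingerprint: Optional[str] = None, feature=None):
    ...
```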
### Environment info
Not environment-dependent
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7864/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7864/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7863
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7863/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7863/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7863/events
|
https://github.com/huggingface/datasets/issues/7863
| 3,618,836,821
|
I_kwDODunzps7XsxFV
| 7,863
|
Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664715?v=4",
"events_url": "https://api.github.com/users/pavanramkumar/events{/privacy}",
"followers_url": "https://api.github.com/users/pavanramkumar/followers",
"following_url": "https://api.github.com/users/pavanramkumar/following{/other_user}",
"gists_url": "https://api.github.com/users/pavanramkumar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pavanramkumar",
"id": 3664715,
"login": "pavanramkumar",
"node_id": "MDQ6VXNlcjM2NjQ3MTU=",
"organizations_url": "https://api.github.com/users/pavanramkumar/orgs",
"received_events_url": "https://api.github.com/users/pavanramkumar/received_events",
"repos_url": "https://api.github.com/users/pavanramkumar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pavanramkumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavanramkumar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pavanramkumar",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"Kudos!",
"So cool ! Would love to see support for lance :)",
"@lhoestq thanks for your support! Any suggestions across `datasets` or `huggingface_hub` projects to make this happen?\n\nI just noticed this blog post: https://huggingface.co/blog/streaming-datasets\n\nDo you know if `hfFileSystem` from `huggingface_hub` is flexible enough to accommodate lance? I don't want to `open` and scan a file, I want to create generators with the `lance.dataset.to_batches()` from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nIdeally, something like this should just work:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nLooking at the huggingface blog post, I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions) cc @prrao87, @changhiskhan",
"> Do you know if HfFileSystem from huggingface_hub is flexible enough to accommodate lance?\n\nit provides file-like objects for files on HF, and works using range requests. PyArrow uses HfFileSystem for HF files already\n\nThough in the Parquet / PyArrow case the data is read generally row group per row group (using range requests with a minimum size `range_size_limit ` to optimize I/O in case of small row groups)\n\nPS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\n> I don't want to open and scan a file, I want to create generators with the lance.dataset.to_batches() from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nWe do something very similar for Parquet here: \n\nhttps://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/packaged_modules/parquet/parquet.py#L168-L169",
"Hi, I work on the Lance project. We'd be happy to see the format supported on huggingface hub.\n\nIt's not clear to me from this thread what is required for that. Could we clarify that? Are there examples we can point to?\n\n> I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions)\n\nCould you elaborate why a `FragmentScanOptions` subclass is required? Also, if it is, we could just define that as a subclass within the `pylance` module, unless I'm missing something.\n\nLance supports OpenDAL storage, so I think we could add support for huggingface's filesystem through that and make sure it's exposed in pylance. Could also help implement some write operations. Perhaps that's the main blocker? ",
"> PS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\nHi, I’m willing to add full-fledged support for the HF file system. This shouldn’t be considered a blocker. 🤟 ",
"Exposing the existing HF filesystem from OpenDAL in pylance would be great ! and a good first step\n\nExcited for write operations too",
"Thanks @lhoestq @wjones127 @Xuanwo ! I think we have all the necessary people on this thread now to make it happen :)\n\n> Could you elaborate why a FragmentScanOptions subclass is required? Also, if it is, we could just define that as a subclass within the pylance module, unless I'm missing something.\n\n@wjones127 I'm not actually sure this is needed but I'm guessing based on [this blog post](https://huggingface.co/blog/streaming-datasets) from a couple of weeks ago. Specifically, this section which allows creation of a dataset object with configurable prefetching:\n\n```\nimport pyarrow\nimport pyarrow.dataset\n\nfragment_scan_options = pyarrow.dataset.ParquetFragmentScanOptions(\n cache_options=pyarrow.CacheOptions(\n prefetch_limit=1,\n range_size_limit=128 << 20\n ),\n)\nds = load_dataset(parquet_dataset_id, streaming=True, fragment_scan_options=fragment_scan_options)\n```\n\nI might be completely wrong that we do need an equivalent `LanceFragmentScanOptions` PR into `pyarrow` and the `OpenDAL` path might be sufficient.\n\nI really just want something like this to work out of the box:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nIn the ideal case, I'd like to be able to control prefetch configuration via arguments to `to_batches()` like the ones that already exist for a lance dataset on any S3-compatible object store.\n\nWould a useful approach be to create a toy lance dataset on huggingface and see if this \"just works\"; then work backwards from there?\n\nAs for writing, I'm looking to migrate datasets from my own private S3-compatible object store bucket (Tigris Data) to huggingface datasets but ~~I'm 100% sure~~ I'm _not_ 100% sure whether we even need `hfFileSystem` compatible write capability\n\n\n",
"Here's a public dataset which could be a working example to work backwards from:\n\nhttps://huggingface.co/datasets/pavan-ramkumar/test-slaf\n\npylance currently looks for default object store backends and returns this `ValueError`\n\n```\n>>> import lance\n>>> hf_path = \"hf://datasets/pavan-ramkumar/test-slaf/tree/main/synthetic_50k_processed_v21.slaf/expression.lance\"\n>>> ds = lance.dataset(hf_path)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/__init__.py\", line 145, in dataset\n ds = LanceDataset(\n ^^^^^^^^^^^^^\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/dataset.py\", line 425, in __init__\n self._ds = _Dataset(\n ^^^^^^^^^\nValueError: Invalid user input: No object store provider found for scheme: 'hf'\nValid schemes: gs, memory, s3, az, file-object-store, file, oss, s3+ddb, /Users/runner/work/lance/lance/rust/lance-io/src/object_store/providers.rs:161:54\n```",
"@Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n\nDo let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub",
"> @Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n> \n> Do let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub\n\nI'm willing to work on this! Would you like to create an issue on lance side and ping me there?",
" > I'm willing to work on this! Would you like to create an issue on lance side and ping me there?\n\nDone! [Link](https://github.com/lance-format/lance/issues/5346)\n",
"@pavanramkumar pls check this out once it's merged! https://github.com/lance-format/lance/pull/5353"
] | 2025-11-13T00:51:07
| 2025-11-26T14:10:29
| null |
NONE
| null | null | null | null |
### Feature request
Hugging Face `datasets` has great support for large tabular datasets in Parquet with large partitions. I would love to see two things in the future:
- equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library
- more fine-grained control of streaming, so that I can stream at the partition / shard level
### Motivation
I work with very large `lance` datasets on S3 and often require random access for AI/ML applications like multi-node training. I was able to achieve high-throughput dataloading on a lance dataset with ~150B rows by building distributed dataloaders that can be scaled both vertically (until I/O and CPU are saturated) and then horizontally (to work around network bottlenecks).
Using this strategy I was able to achieve 10-20x the throughput of the streaming data loader from the `huggingface/datasets` library.
I realized that these would be great features for Hugging Face to support natively; a rough sketch of the shard-level workflow I have in mind is below.
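Rough sketch (not a proposal for the lance API itself) of the kind of shard-level control I mean, using the existing `split_dataset_by_node` helper with Parquet streaming; the dataset id is a placeholder:
```python
# Each node streams only its own shards; this is roughly the granularity of
# control I'd like to have for lance / vortex / iceberg / zarr datasets too.
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node
ds = load_dataset("user/parquet-dataset", split="train", streaming=True)  # placeholder repo id
node_ds = split_dataset_by_node(ds, rank=0, world_size=8)
for example in node_ds:
    ...  # feed a per-node dataloader
```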
### Your contribution
I'm not ready yet to make a PR but open to it with the right pointers!
| null |
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 5,
"hooray": 2,
"laugh": 2,
"rocket": 8,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7863/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7863/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7861
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7861/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7861/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7861/events
|
https://github.com/huggingface/datasets/issues/7861
| 3,611,821,713
|
I_kwDODunzps7XSAaR
| 7,861
|
Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/222552287?v=4",
"events_url": "https://api.github.com/users/KCKawalkar/events{/privacy}",
"followers_url": "https://api.github.com/users/KCKawalkar/followers",
"following_url": "https://api.github.com/users/KCKawalkar/following{/other_user}",
"gists_url": "https://api.github.com/users/KCKawalkar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KCKawalkar",
"id": 222552287,
"login": "KCKawalkar",
"node_id": "U_kgDODUPg3w",
"organizations_url": "https://api.github.com/users/KCKawalkar/orgs",
"received_events_url": "https://api.github.com/users/KCKawalkar/received_events",
"repos_url": "https://api.github.com/users/KCKawalkar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KCKawalkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KCKawalkar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KCKawalkar",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-11-11T11:05:38
| 2025-11-11T11:05:38
| null |
NONE
| null | null | null | null |
## 🐛 Bug Description
The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.
**Root cause**: This line rebuilds the entire dataset unnecessarily:
```python
dataset = self.flatten_indices() if self._indices is not None else self
```
## 📊 Performance Impact
| Dataset Size | Operation | Save Time | Slowdown |
|-------------|-----------|-----------|----------|
| 100K | Baseline (no indices) | 0.027s | - |
| 100K | Filtered (with indices) | 0.146s | **+431%** |
| 100K | Shuffled (with indices) | 0.332s | **+1107%** |
| 250K | Shuffled (with indices) | 0.849s | **+1202%** |
## 🔄 Reproduction
```python
from datasets import Dataset
import time
# Create dataset
dataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})
# Baseline save (no indices)
start = time.time()
dataset.save_to_disk('baseline')
baseline_time = time.time() - start
# Filtered save (creates indices)
filtered = dataset.filter(lambda x: True)
start = time.time()
filtered.save_to_disk('filtered')
filtered_time = time.time() - start
print(f"Baseline: {baseline_time:.3f}s")
print(f"Filtered: {filtered_time:.3f}s")
print(f"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%")
```
**Expected output**: Filtered dataset is 400-1000% slower than baseline
## 💡 Proposed Solution
Add optional parameter to control flattening:
```python
def save_to_disk(self, dataset_path, flatten_indices=True):
dataset = self.flatten_indices() if (self._indices is not None and flatten_indices) else self
# ... rest of save logic
```
**Benefits**:
- ✅ Immediate performance improvement for users who don't need flattening
- ✅ Backwards compatible (default behavior unchanged)
- ✅ Simple implementation
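Usage with the proposed (hypothetical, not yet existing) argument would then look like this:
```python
from datasets import Dataset
# Hypothetical usage of the proposed flatten_indices argument: keep the indices
# mapping instead of rebuilding the full table at save time.
dataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})
filtered = dataset.filter(lambda x: True)
filtered.save_to_disk('filtered_fast', flatten_indices=False)  # proposed parameter
```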
## 🌍 Environment
- **datasets version**: 2.x
- **Python**: 3.10+
- **OS**: Linux/macOS/Windows
## 📈 Impact
This affects **most ML preprocessing workflows** that filter/shuffle datasets before saving. The performance degradation grows with dataset size, making it a significant bottleneck for production systems.
## 🔗 Additional Resources
We have comprehensive test scripts demonstrating this across multiple scenarios if needed for further investigation.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7861/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7861/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7856
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7856/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7856/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7856/events
|
https://github.com/huggingface/datasets/issues/7856
| 3,603,729,142
|
I_kwDODunzps7WzIr2
| 7,856
|
Missing transcript column when loading a local dataset with "audiofolder"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gweltou",
"id": 10166907,
"login": "gweltou",
"node_id": "MDQ6VXNlcjEwMTY2OTA3",
"organizations_url": "https://api.github.com/users/gweltou/orgs",
"received_events_url": "https://api.github.com/users/gweltou/received_events",
"repos_url": "https://api.github.com/users/gweltou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gweltou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gweltou",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"First bad commit 5c8869f8c36dbc8c8d423030b7b7c4fd64f8c729\n\nEDIT: This is not a bug or a regression. It was a breaking change introduced in the commit I mentioned and was also documented in there. The docs state how to handle this now, see https://huggingface.co/docs/datasets/main/en/audio_load#audiofolder-with-metadata\n\nor simply, move your metadata into the splits folder and update the paths, in your case this would look like this:\n```bash\nmy_dataset/\n - data/\n - test/\n - 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\n - 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\n - metadata.jsonl\n```\n\nand the pahts in the jsonl should be relative to the metadata.json:\n```bash\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\", \"transcript\": \"Ata tudoù penaos e tro ar bed ?\"}\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\", \"transcript\": \"Ur gwir blijadur eo adkavout ac'hanoc'h hiziv.\"}\n...\n```\n\nSo I think this can be closed.",
"Thank you for your quick answer !\nI'm sorry I missed that in the documentation.\nEverything works fine again after following your recommendations.\nI'm closing the issue."
] | 2025-11-08T16:27:58
| 2025-11-09T12:13:38
| 2025-11-09T12:13:38
|
NONE
| null | null | null | null |
### Describe the bug
My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file.
Only the `audio` column is read while the `transcript` column is not.
The last tested `datasets` version where the behavior was still correct is 2.18.0.
### Steps to reproduce the bug
Dataset directory structure:
```
my_dataset/
- data/
- test/
- 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3
- 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3
- ...
- metadata.jsonl
```
`metadata.jsonl` file content:
```
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3", "transcript": "Ata tudoù penaos e tro ar bed ?"}
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3", "transcript": "Ur gwir blijadur eo adkavout ac'hanoc'h hiziv."}
...
```
```python3
my_dataset = load_dataset("audiofolder", data_dir="my_dataset")
print(my_dataset)
'''
DatasetDict({
test: Dataset({
features: ['audio'],
num_rows: 347
})
})
'''
print(my_dataset['test'][0])
'''
{'audio': <datasets.features._torchcodec.AudioDecoder object at 0x75ffcd172510>}
'''
```
### Expected behavior
Being able to access the `transcript` column in the loaded dataset.
### Environment info
- `datasets` version: 4.4.1
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.39
- Python version: 3.13.9
- `huggingface_hub` version: 1.1.2
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
Note: same issue with `datasets` v3.6.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gweltou",
"id": 10166907,
"login": "gweltou",
"node_id": "MDQ6VXNlcjEwMTY2OTA3",
"organizations_url": "https://api.github.com/users/gweltou/orgs",
"received_events_url": "https://api.github.com/users/gweltou/received_events",
"repos_url": "https://api.github.com/users/gweltou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gweltou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gweltou",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7856/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7856/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 19:45:40
|
https://api.github.com/repos/huggingface/datasets/issues/7852
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7852/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7852/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7852/events
|
https://github.com/huggingface/datasets/issues/7852
| 3,595,450,602
|
I_kwDODunzps7WTjjq
| 7,852
|
Problems with NifTI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"> 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n\nwhat did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't",
"> > 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n> \n> what did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't\n\nI used `push_to_hub` but the problem is that the nifti feature does not have an `embed_storage` function"
] | 2025-11-06T11:46:33
| 2025-11-06T16:20:38
| 2025-11-06T16:20:38
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains local file paths to the NIfTI files:
```bash
table['nifti']
<pyarrow.lib.ChunkedArray object at 0x798245d37d60>
[
-- is_valid: all not null
-- child 0 type: binary
[
null,
null,
null,
null,
null,
null
]
-- child 1 type: string
[
"/home/tobias/programming/github/datasets/nifti_extracted/T1.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii"
]
]
```
instead of containing the file bytes. The code was copy-pasted from the PDF feature, so I wonder what is going wrong here.
### Steps to reproduce the bug
see the linked comment
### Expected behavior
downloading should work as smoothly as for pdf
### Environment info
- `datasets` version: 4.4.2.dev0
- Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7852/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7852/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4:34:05
|
https://api.github.com/repos/huggingface/datasets/issues/7842
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7842/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7842/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7842/events
|
https://github.com/huggingface/datasets/issues/7842
| 3,582,182,995
|
I_kwDODunzps7Vg8ZT
| 7,842
|
Transform with columns parameter triggers on non-specified column access
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18426892?v=4",
"events_url": "https://api.github.com/users/mr-brobot/events{/privacy}",
"followers_url": "https://api.github.com/users/mr-brobot/followers",
"following_url": "https://api.github.com/users/mr-brobot/following{/other_user}",
"gists_url": "https://api.github.com/users/mr-brobot/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mr-brobot",
"id": 18426892,
"login": "mr-brobot",
"node_id": "MDQ6VXNlcjE4NDI2ODky",
"organizations_url": "https://api.github.com/users/mr-brobot/orgs",
"received_events_url": "https://api.github.com/users/mr-brobot/received_events",
"repos_url": "https://api.github.com/users/mr-brobot/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mr-brobot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mr-brobot/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mr-brobot",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2025-11-03T13:55:27
| 2025-11-03T14:34:13
| 2025-11-03T14:34:13
|
NONE
| null | null | null | null |
### Describe the bug
Iterating over a [`Column`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L633-L692) iterates through the parent [`Dataset`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L695) and applies all formatting/transforms on each row, regardless of which column is being accessed. This causes an error when transforms depend on columns not present in the projection.
### Steps to reproduce the bug
### Load a dataset with multiple columns
```python
ds = load_dataset("mrbrobot/isic-2024", split="train")
```
### Define a transform that specifies an input column
```python
def image_transform(batch):
batch["image"] = batch["image"] # KeyError when batch doesn't contain "image"
return batch
# apply transform only to image column
ds = ds.with_format("torch")
ds = ds.with_transform(image_transform, columns=["image"], output_all_columns=True)
```
### Iterate over non-specified column
```python
# iterate over a different column, triggers the transform on each row, but batch doesn't contain "image"
for t in ds["target"]: # KeyError: 'image'
print(t)
```
### Expected behavior
If a user iterates over `ds["target"]` and the transform specifies `columns=["image"]`, the transform should be skipped.
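In the meantime, a defensive workaround on the user side is to make the transform tolerant of batches that don't contain the projected column (a sketch, assuming the transform is simply called with whatever columns are being accessed):
```python
from datasets import load_dataset
ds = load_dataset("mrbrobot/isic-2024", split="train")
def image_transform(batch):
    if "image" in batch:  # skip when the accessed projection doesn't include "image"
        batch["image"] = batch["image"]  # real preprocessing would go here
    return batch
ds = ds.with_format("torch")
ds = ds.with_transform(image_transform, columns=["image"], output_all_columns=True)
for t in ds["target"]:  # no KeyError with the guard in place
    print(t)
```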
### Environment info
`datasets`: 4.2.0
Python: 3.12.12
Linux: Debian 11.11
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7842/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7842/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:38:46
|
https://api.github.com/repos/huggingface/datasets/issues/7841
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7841/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7841/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7841/events
|
https://github.com/huggingface/datasets/issues/7841
| 3,579,506,747
|
I_kwDODunzps7VWvA7
| 7,841
|
DOC: `mode` parameter on pdf and video features unused
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"They seem to be artefacts from a copy-paste of the Image feature ^^' we should remove them"
] | 2025-11-02T12:37:47
| 2025-11-05T14:04:04
| 2025-11-05T14:04:04
|
CONTRIBUTOR
| null | null | null | null |
Following up on https://github.com/huggingface/datasets/pull/7840, I asked Claude Code to check the other features for parameters that are documented but unused, and it found:
- the `mode` parameter on the Video feature is documented but unused: https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py#L48-L49
- the same goes for the `mode` parameter on the Pdf feature: https://github.com/huggingface/datasets/blob/main/src/datasets/features/pdf.py#L47-L48
I assume the way to go here is to check whether these modes can actually be supported and otherwise remove them.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7841/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7841/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 1:26:17
|
https://api.github.com/repos/huggingface/datasets/issues/7839
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7839/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7839/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7839/events
|
https://github.com/huggingface/datasets/issues/7839
| 3,579,121,843
|
I_kwDODunzps7VVRCz
| 7,839
|
datasets doesn't work with python 3.14
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zachmoshe",
"id": 4789087,
"login": "zachmoshe",
"node_id": "MDQ6VXNlcjQ3ODkwODc=",
"organizations_url": "https://api.github.com/users/zachmoshe/orgs",
"received_events_url": "https://api.github.com/users/zachmoshe/received_events",
"repos_url": "https://api.github.com/users/zachmoshe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zachmoshe",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for the report.\nHave you tried on main? This should work, there was recently a PR merged to address this problem, see #7817",
"Works on main 👍 \nWhat's the release schedule for `datasets`? Seems like a cadence of ~2weeks so I assume a real version is due pretty soon?",
"let's say we do a new release later today ? :)",
"Premium service! \n😂 👑 \nJust checked 4.4.0 - works as expected!"
] | 2025-11-02T09:09:06
| 2025-11-04T14:02:25
| 2025-11-04T14:02:25
|
NONE
| null | null | null | null |
### Describe the bug
Seems that `datasets` doesn't work with python==3.14. The root cause seems to be a `dill` API that was changed.
```
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Steps to reproduce the bug
(on a new folder)
uv init
uv python pin 3.14
uv add datasets
uv run python
(in REPL)
import datasets
datasets.load_dataset("cais/mmlu", "all") # will fail on any dataset
```
>>> datasets.load_dataset("cais/mmlu", "all")
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
datasets.load_dataset("cais/mmlu", "all")
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
path=path,
...<10 lines>...
**config_kwargs,
)
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 615, in _use_legacy_cache_dir_if_possible
self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 487, in _check_legacy_cache2
config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
return cls.hash_bytes(dumps(value))
~~~~~^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
dump(obj, file)
~~~~^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
Pickler(file, recurse=True).dump(obj)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
StockPickler.dump(self, obj)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 498, in dump
self.save(obj)
~~~~~~~~~^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
StockPickler.save(self, obj, save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 572, in save
f(self, obj) # Call unbound method with explicit self
~^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
StockPickler.save_dict(pickler, obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 1064, in save_dict
self._batch_setitems(obj.items(), obj)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Expected behavior
should work.
### Environment info
datasets==v4.3.0
python==3.14
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zachmoshe",
"id": 4789087,
"login": "zachmoshe",
"node_id": "MDQ6VXNlcjQ3ODkwODc=",
"organizations_url": "https://api.github.com/users/zachmoshe/orgs",
"received_events_url": "https://api.github.com/users/zachmoshe/received_events",
"repos_url": "https://api.github.com/users/zachmoshe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zachmoshe",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7839/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7839/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 4:53:19
|
https://api.github.com/repos/huggingface/datasets/issues/7837
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7837/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7837/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7837/events
|
https://github.com/huggingface/datasets/issues/7837
| 3,575,454,726
|
I_kwDODunzps7VHRwG
| 7,837
|
mono parameter to the Audio feature is missing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4",
"events_url": "https://api.github.com/users/ernestum/events{/privacy}",
"followers_url": "https://api.github.com/users/ernestum/followers",
"following_url": "https://api.github.com/users/ernestum/following{/other_user}",
"gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ernestum",
"id": 1250234,
"login": "ernestum",
"node_id": "MDQ6VXNlcjEyNTAyMzQ=",
"organizations_url": "https://api.github.com/users/ernestum/orgs",
"received_events_url": "https://api.github.com/users/ernestum/received_events",
"repos_url": "https://api.github.com/users/ernestum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ernestum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ernestum",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hey, we removed the misleading passage in the docstring and enabled support for `num_channels` as torchcodec does",
"thanks!"
] | 2025-10-31T15:41:39
| 2025-11-03T15:59:18
| 2025-11-03T14:24:12
|
NONE
| null | null | null | null |
According to the docs, there is a `mono` parameter to the `Audio` feature, which is supposed to downmix any stereo signal to mono. In practice the signal is not touched, and the `mono` parameter, even though documented, does not exist.
https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/audio.py#L52C1-L54C22
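For anyone who relied on the documented behavior, downmixing can be done manually after decoding; a rough sketch (the exact decoder output and channel-axis layout depend on the `datasets`/`torchcodec` version, so treat the axis handling as an assumption):
```python
import numpy as np
# Average the channel axis to obtain a mono signal; assumes a multi-channel
# signal is shaped (channels, samples), adjust the axis if your version differs.
def to_mono(array: np.ndarray) -> np.ndarray:
    return array.mean(axis=0) if array.ndim > 1 else array
```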
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7837/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7837/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 22:42:33
|
https://api.github.com/repos/huggingface/datasets/issues/7834
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7834/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7834/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7834/events
|
https://github.com/huggingface/datasets/issues/7834
| 3,558,802,959
|
I_kwDODunzps7UHwYP
| 7,834
|
Audio.cast_column() or Audio.decode_example() causes Colab kernel crash (std::bad_alloc)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2559570?v=4",
"events_url": "https://api.github.com/users/rachidio/events{/privacy}",
"followers_url": "https://api.github.com/users/rachidio/followers",
"following_url": "https://api.github.com/users/rachidio/following{/other_user}",
"gists_url": "https://api.github.com/users/rachidio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rachidio",
"id": 2559570,
"login": "rachidio",
"node_id": "MDQ6VXNlcjI1NTk1NzA=",
"organizations_url": "https://api.github.com/users/rachidio/orgs",
"received_events_url": "https://api.github.com/users/rachidio/received_events",
"repos_url": "https://api.github.com/users/rachidio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rachidio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rachidio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rachidio",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi ! `datasets` v4 uses `torchcodec` for audio decoding (previous versions were using `soundfile`). What is your `torchcodec` version ? Can you try other versions of `torchcodec` and see if it works ?",
"When I install `datasets` with `pip install datasets[audio]` it install this version of `torchcodec`:\n```\nName: torchcodec\nVersion: 0.8.1\n```\nCan you please point to a working version of `torchcodec`?\n\nThanks for your help",
"I believe you simply need to make sure the torchcodec and torch versions work together. Here is how to fix it:\n\n```python\n!pip install -U torchcodec torch\n```",
"I am also encountering this same issue when i run `print(ug_court[\"train\"][0])` to view the features of the first row of my audio data",
"the problem still goes on to when i force training with seeing these features",
"Thank you @lhoestq I've reinstalled the packages an the error is gone.\nMy new versions are:\n```\nName: torch\nVersion: 2.8.0\n---\nName: torchaudio\nVersion: 2.8.0\n---\nName: torchcodec\nVersion: 0.8.1\n```\n\nRegards",
"mine too has worked ",
"Hi,\n\nI encounter the same problem when trying to inspect the first element in the dataset. My environment is:\n```\nroot@3ac6f9f8c6c4:/workspace# pip3 list | grep torch\npytorch-lightning 2.5.6\npytorch-metric-learning 2.9.0\ntorch 2.8.0+cu126\ntorch-audiomentations 0.12.0\ntorch_pitch_shift 1.2.5\ntorchaudio 2.8.0+cu126\ntorchcodec 0.8.1\ntorchelastic 0.2.2\ntorchmetrics 1.8.2\ntorchvision 0.23.0+cu126\n```\nthe same as @rachidio 's new version that works.\n\nI am in a Docker container environment, and here is the code I am working with:\n\n<img width=\"1350\" height=\"388\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/4cf0400f-9ee7-47c7-ba57-c4ef3c1e7fd6\" />"
] | 2025-10-27T22:02:00
| 2025-11-15T16:28:04
| null |
NONE
| null | null | null | null |
### Describe the bug
When using the Hugging Face `datasets.Audio` feature to decode a local or remote (public HF dataset) audio file inside Google Colab, the notebook kernel crashes with `std::bad_alloc` (a C++ memory allocation failure).
The crash happens even with a minimal code example and a valid `.wav` file that can be read successfully using `soundfile`.
Here is a sample Colab notebook to reproduce the problem:
https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing
code sample:
```
...
audio_dataset = audio_dataset.cast_column("audio", Audio(sampling_rate=16000))
# Accessing the first element crashes the Colab kernel
print(audio_dataset[0]["audio"])
```
Error log
```
WARNING what(): std::bad_alloc
terminate called after throwing an instance of 'std::bad_alloc'
```
Environment
Platform: Google Colab (Python 3.12.12)
datasets Version: 4.3.0
soundfile Version: 0.13.1
torchaudio Version: 2.8.0+cu126
Thanks in advance for any help with this error, which started appearing about two weeks ago after previously working fine.
Regards
### Steps to reproduce the bug
https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing
### Expected behavior
The audio should load and decode correctly.
It should safely return:
{
"path": "path/filaname.wav",
"array": np.ndarray([...]),
"sampling_rate": 16000
}
### Environment info
Environment
Platform: Google Colab (Python 3.12.12)
datasets Version: 4.3.0
soundfile Version: 0.13.1
torchaudio Version: 2.8.0+cu126
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7834/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7834/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7832
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7832/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7832/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7832/events
|
https://github.com/huggingface/datasets/issues/7832
| 3,555,991,552
|
I_kwDODunzps7T9CAA
| 7,832
|
[DOCS][minor] TIPS paragraph not compiled in docs/stream
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
"gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/art-test-stack",
"id": 110672812,
"login": "art-test-stack",
"node_id": "U_kgDOBpi7rA",
"organizations_url": "https://api.github.com/users/art-test-stack/orgs",
"received_events_url": "https://api.github.com/users/art-test-stack/received_events",
"repos_url": "https://api.github.com/users/art-test-stack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/art-test-stack",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2025-10-27T10:03:22
| 2025-10-27T10:10:54
| 2025-10-27T10:10:54
|
CONTRIBUTOR
| null | null | null | null |
In the rendered documentation, the markdown 'TIP' paragraph in docs/stream#shuffle is not compiled into a tip box, unlike the other tips on the same page, even though GitHub's markdown renders it correctly.
Documentation:
https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle()%5D(/docs/datasets/v4.3.0/en/package_reference/main_classes%23datasets.IterableDataset.shuffle)%20will%20also%20shuffle%20the%20order%20of%20the%20shards%20if%20the%20dataset%20is%20sharded%20into%20multiple%20files.
Github source:
https://github.com/huggingface/datasets/blob/main/docs/source/stream.mdx#:~:text=Casting%20only%20works%20if%20the%20original%20feature%20type%20and%20new%20feature%20type%20are%20compatible.%20For%20example%2C%20you%20can%20cast%20a%20column%20with%20the%20feature%20type%20Value(%27int32%27)%20to%20Value(%27bool%27)%20if%20the%20original%20column%20only%20contains%20ones%20and%20zeros.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
"gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/art-test-stack",
"id": 110672812,
"login": "art-test-stack",
"node_id": "U_kgDOBpi7rA",
"organizations_url": "https://api.github.com/users/art-test-stack/orgs",
"received_events_url": "https://api.github.com/users/art-test-stack/received_events",
"repos_url": "https://api.github.com/users/art-test-stack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/art-test-stack",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7832/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7832/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:07:32
|
https://api.github.com/repos/huggingface/datasets/issues/7829
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7829/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7829/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7829/events
|
https://github.com/huggingface/datasets/issues/7829
| 3,548,584,085
|
I_kwDODunzps7TgxiV
| 7,829
|
Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24591024?v=4",
"events_url": "https://api.github.com/users/raphaelsty/events{/privacy}",
"followers_url": "https://api.github.com/users/raphaelsty/followers",
"following_url": "https://api.github.com/users/raphaelsty/following{/other_user}",
"gists_url": "https://api.github.com/users/raphaelsty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/raphaelsty",
"id": 24591024,
"login": "raphaelsty",
"node_id": "MDQ6VXNlcjI0NTkxMDI0",
"organizations_url": "https://api.github.com/users/raphaelsty/orgs",
"received_events_url": "https://api.github.com/users/raphaelsty/received_events",
"repos_url": "https://api.github.com/users/raphaelsty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/raphaelsty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raphaelsty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/raphaelsty",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Thanks for the report, this is possibly related #7722 and #7694.\n\nCould you pls provide steps to reproduce this?",
"To overcome this issue right now I did simply reduce the size of the dataset and ended up running a for loop (my training has now a constant learning rate schedule). From what I understood, and I don't know if it's possible, the solution would be to tell the backend of `datasets` to leave x% of the memory free (including memory mapping). Can't release the data right now but I will and then allow to reproduce this issue. But it will involve to have some free TB of disk",
"@raphaelsty thanks for coming back to this. I assume you are running in streaming mode? That should prevent these errors but it looks like more people than just you have this problem, so a clearly reproducing example (including data + code) is highly appreciated.",
"This could be related to this issue: https://github.com/huggingface/datasets/issues/4883 in which we discussed how RSS and memory mapping works and depends on the OS and disk type."
] | 2025-10-24T09:51:38
| 2025-11-06T13:31:26
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi team, first off, I love the datasets library! 🥰
I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.
Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.
Training Task: I'm performing contrastive learning with SentenceTransformer and Accelerate on a single node with 4 H100, which requires me to sample from only one dataset at a time.
Training Loop: At each training step, I sample ~16,000 examples from a single dataset, and then switch to a different dataset for the next step. I iterate through all 362 datasets this way.
Problem: The process's memory usage continuously increases over time, eventually causing the training to stall (the GPUs stop working). It seems memory from previously sampled datasets isn't being released. I've set num_workers=0 for all experiments.
Chart 1: Standard DatasetDict. The memory usage (RSS) grows steadily until it makes the training stall. <img width="773" height="719" alt="Image" src="https://github.com/user-attachments/assets/6606bef5-1153-4f2d-bf08-82da249d6e8d" />
Chart 2: IterableDatasetDict. I also tried IterableDatasetDict and IterableDataset. The memory curve is "smoother," but the result is the same: it grows indefinitely and the training stalls. <img width="339" height="705" alt="Image" src="https://github.com/user-attachments/assets/ee90c1a1-6c3b-4135-9edc-90955cb1695a" />
Any feedback or guidance on how to manage this memory would be greatly appreciated!
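For illustration only, a minimal sketch of the per-step sampling pattern described above (the path and step count are placeholders, not the actual training code):
```python
from datasets import load_from_disk

# Hypothetical path; the real DatasetDict holds ~362 datasets totalling ~2.8B rows.
dataset_dict = load_from_disk("path/to/dataset_dict")
names = list(dataset_dict.keys())
num_steps = 1_000  # placeholder

for step in range(num_steps):
    ds = dataset_dict[names[step % len(names)]]
    # ~16k examples drawn from a single dataset per step, then switch datasets.
    batch = ds.shuffle(seed=step).select(range(min(16_000, len(ds))))
    # ... run the contrastive-learning step on `batch` ...
```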
### Steps to reproduce the bug
WIP, I'll add some code that manages to reproduce this error, but it's not straightforward.
### Expected behavior
The memory usage should remain relatively constant or plateau after a few steps. Memory used for sampling one dataset should be released before or during the sampling of the next dataset.
### Environment info
Python: 3.12
Datasets: 4.3.0
SentenceTransformers: 5.1.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7829/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7829/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7821
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7821/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7821/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7821/events
|
https://github.com/huggingface/datasets/issues/7821
| 3,520,913,195
|
I_kwDODunzps7R3N8r
| 7,821
|
Building a dataset with large variable size arrays results in error ArrowInvalid: Value X too large to fit in C integer type
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "https://api.github.com/users/kkoutini/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kkoutini",
"id": 51880718,
"login": "kkoutini",
"node_id": "MDQ6VXNlcjUxODgwNzE4",
"organizations_url": "https://api.github.com/users/kkoutini/orgs",
"received_events_url": "https://api.github.com/users/kkoutini/received_events",
"repos_url": "https://api.github.com/users/kkoutini/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kkoutini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkoutini/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kkoutini",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Thanks for reporting ! You can fix this by specifying the output type explicitly and use `LargeList` which uses int64 for offsets:\n\n```python\nfeatures = Features({\"audio\": LargeList(Value(\"uint16\"))})\nds = ds.map(..., features=features)\n```\n\nIt would be cool to improve `list_of_pa_arrays_to_pyarrow_listarray()` to automatically use `LargeList` if the lists are longer than the int32 limit though. Contributions are welcome if you'd like to improve it"
] | 2025-10-16T08:45:17
| 2025-10-20T13:42:05
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
I used `map` to store raw audio waveforms of variable lengths in a column of a dataset; the `map` call fails with ArrowInvalid: Value X too large to fit in C integer type.
```
Traceback (most recent call last):
Traceback (most recent call last):
File "...lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3526, in _map_single
writer.write_batch(batch)
File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 605, in write_batch
arrays.append(pa.array(typed_sequence))
^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 252, in pyarrow.lib.array
File "pyarrow/array.pxi", line 114, in pyarrow.lib._handle_arrow_array_protocol
File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 225, in __arrow_array__
out = list_of_np_array_to_pyarrow_listarray(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/features/features.py", line 1538, in list_of_np_array_to_pyarrow_listarray
return list_of_pa_arrays_to_pyarrow_listarray(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/features/features.py", line 1530, in list_of_pa_arrays_to_pyarrow_listarray
offsets = pa.array(offsets, type=pa.int32())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 362, in pyarrow.lib.array
File "pyarrow/array.pxi", line 87, in pyarrow.lib._ndarray_to_array
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Value 2148479376 too large to fit in C integer type
```
### Steps to reproduce the bug
Calling `map` with a function that returns a column of long 1-D numpy arrays of variable length.
Example:
```python
# %%
import logging
import datasets
import pandas as pd
import numpy as np
# %%
def process_batch(batch, rank):
res = []
for _ in batch["id"]:
res.append(np.zeros((2**30)).astype(np.uint16))
return {"audio": res}
if __name__ == "__main__":
df = pd.DataFrame(
{
"id": list(range(400)),
}
)
ds = datasets.Dataset.from_pandas(df)
try:
from multiprocess import set_start_method
set_start_method("spawn")
except RuntimeError:
print("Spawn method already set, continuing...")
mapped_ds = ds.map(
process_batch,
batched=True,
batch_size=2,
with_rank=True,
num_proc=2,
cache_file_name="path_to_cache/tmp.arrow",
writer_batch_size=200,
remove_columns=ds.column_names,
# disable_nullable=True,
)
```
### Expected behavior
I think the offsets should be pa.int64() if needed and not forced to be `pa.int32()`
in https://github.com/huggingface/datasets/blob/3e13d30823f8ec498d56adbc18c6880a5463b313/src/datasets/features/features.py#L1535
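A minimal pyarrow check, independent of `datasets` and using the offset value from the traceback, showing that 64-bit offsets can hold what overflows int32 here:
```python
import pyarrow as pa

# 2_148_479_376 exceeds the int32 maximum (2**31 - 1 = 2_147_483_647),
# so it cannot be stored as an int32 list offset, but int64 offsets hold it fine.
offsets_64 = pa.array([0, 2_148_479_376], type=pa.int64())
print(offsets_64)
print(pa.large_list(pa.uint16()))  # the Arrow type with int64 offsets, used by LargeList
```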
### Environment info
- `datasets` version: 3.3.1
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.12.9
- `huggingface_hub` version: 0.29.0
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7821/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7821/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7819
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7819/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7819/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7819/events
|
https://github.com/huggingface/datasets/issues/7819
| 3,517,086,110
|
I_kwDODunzps7Ronme
| 7,819
|
Cannot download opus dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51946663?v=4",
"events_url": "https://api.github.com/users/liamsun2019/events{/privacy}",
"followers_url": "https://api.github.com/users/liamsun2019/followers",
"following_url": "https://api.github.com/users/liamsun2019/following{/other_user}",
"gists_url": "https://api.github.com/users/liamsun2019/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liamsun2019",
"id": 51946663,
"login": "liamsun2019",
"node_id": "MDQ6VXNlcjUxOTQ2NjYz",
"organizations_url": "https://api.github.com/users/liamsun2019/orgs",
"received_events_url": "https://api.github.com/users/liamsun2019/received_events",
"repos_url": "https://api.github.com/users/liamsun2019/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liamsun2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liamsun2019/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liamsun2019",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi ! it seems \"en-zh\" doesn't exist for this dataset\n\nYou can see the list of subsets here: https://huggingface.co/datasets/Helsinki-NLP/opus_books"
] | 2025-10-15T09:06:19
| 2025-10-20T13:45:16
| null |
NONE
| null | null | null | null |
When I tried to download opus_books using:
from datasets import load_dataset
dataset = load_dataset("Helsinki-NLP/opus_books")
I got the following errors:
FileNotFoundError: Couldn't find any data file at /workspace/Helsinki-NLP/opus_books. Couldn't find 'Helsinki-NLP/opus_books' on the Hugging Face Hub either: LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
I also tried:
dataset = load_dataset("opus_books", "en-zh")
and the errors remain the same. However, I can download "mlabonne/FineTome-100k" successfully.
My datasets version is 4.2.0.
Any clues? Big thanks.
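A hedged sketch for listing the subsets that actually exist (per the comment above, "en-zh" is not one of them):
```python
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("Helsinki-NLP/opus_books")
print(configs)  # pick an existing language pair from this list

# Then load one of the listed subsets, e.g.:
# dataset = load_dataset("Helsinki-NLP/opus_books", configs[0])
```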
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7819/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7819/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7818
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7818/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7818/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7818/events
|
https://github.com/huggingface/datasets/issues/7818
| 3,515,887,618
|
I_kwDODunzps7RkDAC
| 7,818
|
train_test_split and stratify breaks with Numpy 2.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24845694?v=4",
"events_url": "https://api.github.com/users/davebulaval/events{/privacy}",
"followers_url": "https://api.github.com/users/davebulaval/followers",
"following_url": "https://api.github.com/users/davebulaval/following{/other_user}",
"gists_url": "https://api.github.com/users/davebulaval/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davebulaval",
"id": 24845694,
"login": "davebulaval",
"node_id": "MDQ6VXNlcjI0ODQ1Njk0",
"organizations_url": "https://api.github.com/users/davebulaval/orgs",
"received_events_url": "https://api.github.com/users/davebulaval/received_events",
"repos_url": "https://api.github.com/users/davebulaval/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davebulaval/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davebulaval/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davebulaval",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I can't reproduce this. Could you pls provide an example with a public dataset/artificial dataset and show how you loaded that?\n\nThis works for me:\n\n```python\nimport numpy as np\nfrom datasets import Dataset, Features, ClassLabel, Value\n\ndata = {\"text\": [f\"sample_{i}\" for i in range(100)], \"label\": [i % 3 for i in range(100)]}\nfeatures = Features({\"text\": Value(\"string\"),\n \"label\": ClassLabel(names=[\"class_0\", \"class_1\", \"class_2\"])})\ndataset = Dataset.from_dict(data, features=features)\nsplits = dataset.train_test_split(test_size=0.2, stratify_by_column=\"label\")\nprint(f\"Success with numpy {np.__version__}\")\n```\nbut it also works for `numpy<2`",
"@davebulaval tried with numpy 2.3.4, and maybe i have successfully reproduced the bug!\n```\nValueError: Unable to avoid copy while creating an array as requested.\nIf using `np.array(obj, copy=False)` replace it with `np.asarray(obj)` to allow a copy when needed (no behavior change in NumPy 1.x).\nFor more details, see https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword.\n```\n\nAlso i downgraded to numpy 1.26.4\n```\n(hf-reproduce) F:\\Python\\Machine learning\\reproducing>python repro.py\nDatasetDict({\n train: Dataset({\n features: ['text', 'label'],\n num_rows: 16\n })\n test: Dataset({\n features: ['text', 'label'],\n num_rows: 4\n })\n})\n```",
"Also @CloseChoice The bug only appears in cases where the Arrow array cannot be represented as a contiguous NumPy array without copying.\n\nSo closing the discussion loop here - \n\nThe error occurs because `train_test_split(..., stratify_by_column=...)` attempts to convert\nan Arrow column to a NumPy array using `np.array(..., copy=False)`.\n\nIn NumPy <2.0 this silently allowed a copy if needed.\nIn NumPy ≥2.0 this raises:\nValueError: Unable to avoid copy while creating an array as requested.\n\nThis only happens when the Arrow column is not contiguous in memory, which explains\nwhy some datasets reproduce it and others do not."
] | 2025-10-15T00:01:19
| 2025-10-28T16:10:44
| 2025-10-28T16:10:44
|
NONE
| null | null | null | null |
### Describe the bug
As stated in the title, since NumPy changed the `copy` behavior in version 2.0, the `stratify_by_column` parameter breaks.
e.g. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")` raises a NumPy error.
It works if you downgrade Numpy to a version lower than 2.0.
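For context, a minimal sketch of the NumPy 2.0 behavior change behind this error (not specific to `datasets`):
```python
import numpy as np

data = [1, 2, 3]  # converting a Python list always requires a copy

np.asarray(data)            # fine on both NumPy 1.x and 2.x
np.array(data, copy=False)  # NumPy >= 2.0 raises: "Unable to avoid copy while creating an array"
```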
### Steps to reproduce the bug
1. Numpy > 2.0
2. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")`
### Expected behavior
It returns a stratified split as per the results of Numpy < 2.0
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.35
- Python version: 3.13.7
- Huggingface_hub version: 0.34.4
- PyArrow version: 19.0.0
- Pandas version: 2.3.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7818/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7818/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 13 days, 16:09:25
|
https://api.github.com/repos/huggingface/datasets/issues/7816
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7816/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7816/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7816/events
|
https://github.com/huggingface/datasets/issues/7816
| 3,512,210,206
|
I_kwDODunzps7RWBMe
| 7,816
|
disable_progress_bar() not working as expected
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/windmaple",
"id": 5577741,
"login": "windmaple",
"node_id": "MDQ6VXNlcjU1Nzc3NDE=",
"organizations_url": "https://api.github.com/users/windmaple/orgs",
"received_events_url": "https://api.github.com/users/windmaple/received_events",
"repos_url": "https://api.github.com/users/windmaple/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/windmaple/subscriptions",
"type": "User",
"url": "https://api.github.com/users/windmaple",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"@xianbaoqian ",
"Closing this one since it's a Xet issue."
] | 2025-10-14T03:25:39
| 2025-10-14T23:49:26
| 2025-10-14T23:49:26
|
NONE
| null | null | null | null |
### Describe the bug
Hi,
I'm trying to load a dataset on a Kaggle TPU image. There is a known compatibility issue with progress bars on Kaggle, so I'm trying to disable the progress bar globally. This does not work, as you can see in [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
In contrast, disabling the progress bar for snapshot_download() works as expected, as in [here](https://www.kaggle.com/code/windmaple/snapshot-download-error).
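For reference, a hedged sketch of the global switches involved (the exact `datasets` helper name is an assumption; the linked notebook presumably does something similar):
```python
import datasets
from huggingface_hub.utils import disable_progress_bars

# Assumption: datasets exposes disable_progress_bar() at the top level
# (some versions name it disable_progress_bars(), matching huggingface_hub).
datasets.disable_progress_bar()
disable_progress_bars()  # the huggingface_hub switch, which works per the report

ds = datasets.load_dataset("rotten_tomatoes", split="train")  # placeholder dataset
```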
### Steps to reproduce the bug
See this [notebook](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
Something seems to be wrong with `shell_paraent`.
### Expected behavior
The downloader should disable the progress bar and proceed with no error.
### Environment info
The latest versions, installed via:
!pip install -U datasets ipywidgets ipykernel
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/windmaple",
"id": 5577741,
"login": "windmaple",
"node_id": "MDQ6VXNlcjU1Nzc3NDE=",
"organizations_url": "https://api.github.com/users/windmaple/orgs",
"received_events_url": "https://api.github.com/users/windmaple/received_events",
"repos_url": "https://api.github.com/users/windmaple/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/windmaple/subscriptions",
"type": "User",
"url": "https://api.github.com/users/windmaple",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7816/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7816/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 20:23:47
|
https://api.github.com/repos/huggingface/datasets/issues/7813
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7813/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7813/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7813/events
|
https://github.com/huggingface/datasets/issues/7813
| 3,503,446,288
|
I_kwDODunzps7Q0lkQ
| 7,813
|
Caching does not work when using python3.14
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/intexcor",
"id": 142020129,
"login": "intexcor",
"node_id": "U_kgDOCHcOIQ",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"repos_url": "https://api.github.com/users/intexcor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/intexcor",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"https://github.com/uqfoundation/dill/issues/725",
"@intexcor does #7817 fix your problem?"
] | 2025-10-10T15:36:46
| 2025-10-27T17:08:26
| 2025-10-27T17:08:26
|
NONE
| null | null | null | null |
### Describe the bug
Traceback (most recent call last):
File "/workspace/ctn.py", line 8, in <module>
ds = load_dataset(f"naver-clova-ix/synthdog-{lang}") # или "synthdog-zh" для китайского
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
path=path,
...<10 lines>...
**config_kwargs,
)
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 612, in _use_legacy_cache_dir_if_possible
self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 485, in _check_legacy_cache2
config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
return cls.hash_bytes(dumps(value))
~~~~~^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
dump(obj, file)
~~~~^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
Pickler(file, recurse=True).dump(obj)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
StockPickler.dump(self, obj)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 498, in dump
self.save(obj)
~~~~~~~~~^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
StockPickler.save(self, obj, save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 572, in save
f(self, obj) # Call unbound method with explicit self
~^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
StockPickler.save_dict(pickler, obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 1064, in save_dict
self._batch_setitems(obj.items(), obj)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
### Steps to reproduce the bug
ds_train = ds["train"].map(lambda x: {**x, "lang": lang})
### Expected behavior
Fixed bugs
### Environment info
- `datasets` version: 4.2.0
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.39
- Python version: 3.14.0
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7813/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7813/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17 days, 1:31:40
|
https://api.github.com/repos/huggingface/datasets/issues/7811
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7811/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7811/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7811/events
|
https://github.com/huggingface/datasets/issues/7811
| 3,500,741,658
|
I_kwDODunzps7QqRQa
| 7,811
|
SIGSEGV when Python exits due to near null deref
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5192353?v=4",
"events_url": "https://api.github.com/users/iankronquist/events{/privacy}",
"followers_url": "https://api.github.com/users/iankronquist/followers",
"following_url": "https://api.github.com/users/iankronquist/following{/other_user}",
"gists_url": "https://api.github.com/users/iankronquist/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iankronquist",
"id": 5192353,
"login": "iankronquist",
"node_id": "MDQ6VXNlcjUxOTIzNTM=",
"organizations_url": "https://api.github.com/users/iankronquist/orgs",
"received_events_url": "https://api.github.com/users/iankronquist/received_events",
"repos_url": "https://api.github.com/users/iankronquist/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iankronquist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iankronquist/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iankronquist",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"The issue seems to come from `dill` which is a `datasets` dependency, e.g. this segfaults:\n\n```python\nimport dill\nfrom tqdm import tqdm\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\n```\n\n`tqdm` seems to segfault when `dill` is imported. I only found this about segfault but it's maybe not related https://github.com/tqdm/tqdm/issues/1678 ?",
"After more investigation it seems to be because of it imports `__main__`. This segfaults:\n\n```python\nimport __main__\nfrom tqdm import tqdm\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\n```\n\nI opened an issue at https://github.com/tqdm/tqdm/issues/1687",
"Here is a workaround. You can run your code as long as the progress bar is closed before exiting.\n\n```python\nfrom datasets import load_dataset\nfrom tqdm import tqdm\n\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\nprogress_bar.close() # avoids the segfault\n```",
"https://github.com/tqdm/tqdm/issues/1687#issuecomment-3392457094"
] | 2025-10-09T22:00:11
| 2025-10-10T22:09:24
| null |
NONE
| null | null | null | null |
### Describe the bug
When I run the following python script using datasets I get a segfault.
```python
from datasets import load_dataset
from tqdm import tqdm
progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
progress_bar.update(1)
```
```
% lldb -- python3 crashmin.py
(lldb) target create "python3"
Current executable set to '/Users/ian/bug/venv/bin/python3' (arm64).
(lldb) settings set -- target.run-args "crashmin.py"
(lldb) r
Process 8095 launched: '/Users/ian/bug/venv/bin/python3' (arm64)
Process 8095 stopped
* thread #2, stop reason = exec
frame #0: 0x0000000100014b30 dyld`_dyld_start
dyld`_dyld_start:
-> 0x100014b30 <+0>: mov x0, sp
0x100014b34 <+4>: and sp, x0, #0xfffffffffffffff0
0x100014b38 <+8>: mov x29, #0x0 ; =0
Target 0: (Python) stopped.
(lldb) c
Process 8095 resuming
cols : 0% 0/1000 [00:00<?, ?cols/s]Process 8095 stopped
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
_datetime.cpython-313-darwin.so`delta_new:
-> 0x101783454 <+188>: ldr x3, [x20, #0x10]
0x101783458 <+192>: adrp x0, 10
0x10178345c <+196>: add x0, x0, #0x6fc ; "seconds"
Target 0: (Python) stopped.
(lldb) bt
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
* frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
frame #1: 0x0000000100704b60 Python`type_call + 96
frame #2: 0x000000010067ba34 Python`_PyObject_MakeTpCall + 120
frame #3: 0x00000001007aae3c Python`_PyEval_EvalFrameDefault + 30236
frame #4: 0x000000010067c900 Python`PyObject_CallOneArg + 112
frame #5: 0x000000010070f0a0 Python`slot_tp_finalize + 116
frame #6: 0x000000010070c3b4 Python`subtype_dealloc + 788
frame #7: 0x00000001006c378c Python`insertdict + 756
frame #8: 0x00000001006db2b0 Python`_PyModule_ClearDict + 660
frame #9: 0x000000010080a9a8 Python`finalize_modules + 1772
frame #10: 0x0000000100809a44 Python`_Py_Finalize + 264
frame #11: 0x0000000100837630 Python`Py_RunMain + 252
frame #12: 0x0000000100837ef8 Python`pymain_main + 304
frame #13: 0x0000000100837f98 Python`Py_BytesMain + 40
frame #14: 0x000000019cfcc274 dyld`start + 2840
(lldb) register read x20
x20 = 0x0000000000000000
(lldb)
```
### Steps to reproduce the bug
Run the script above, and observe the segfault.
### Expected behavior
No segfault
### Environment info
```
% pip freeze datasets | grep -i datasets
datasets==4.2.0
(venv) 0 ~/bug 14:58:06
% pip freeze tqdm | grep -i tqdm
tqdm==4.67.1
(venv) 0 ~/bug 14:58:16
% python --version
Python 3.13.7
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7811/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7811/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7804
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7804/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7804/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7804/events
|
https://github.com/huggingface/datasets/issues/7804
| 3,498,534,596
|
I_kwDODunzps7Qh2bE
| 7,804
|
Support scientific data formats
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Please add the support for `Zarr`! That's what we use in the Bioimaging community. It is crucial, because raw upload of a *single* bio image can take _terrabytes in memory_!\n\nThe python library would be `bioio` or `zarr`:\n- [ ] Zarr: `bioio` or `zarr`\n\nSee a [Zarr example](https://ome.github.io/ome-ngff-validator/?source=https://uk1s3.embassy.ebi.ac.uk/bia-integrator-data/S-BIAD845/796b9fb8-f4ec-4c4b-bfc3-5cb00ccf19fe/796b9fb8-f4ec-4c4b-bfc3-5cb00ccf19fe.zarr)\n\ncc @joshmoore",
"@stefanches7 `zarr` is already usable with the hf hub as an array store. See this example from the [docs](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system):\n\n```python\nimport numpy as np\nimport zarr\n\nembeddings = np.random.randn(50000, 1000).astype(\"float32\")\n\n# Write an array to a repo\nwith zarr.open_group(\"hf://my-username/my-model-repo/array-store\", mode=\"w\") as root:\n foo = root.create_group(\"embeddings\")\n foobar = foo.zeros('experiment_0', shape=(50000, 1000), chunks=(10000, 1000), dtype='f4')\n foobar[:] = embeddings\n\n# Read an array from a repo\nwith zarr.open_group(\"hf://my-username/my-model-repo/array-store\", mode=\"r\") as root:\n first_row = root[\"embeddings/experiment_0\"][0]\n```\n\nIs there additional functionality that would not be covered by this?",
"@cakiki I think some tiling capabilities, as well as metadata / labels handling. Consult ome-zarr doc here: https://ome-zarr.readthedocs.io/en/stable/python.html\nVisualization would be the cherry on the top. \n\ncc @joshmoore @lubianat @St3V0Bay: curious what you think",
"zarr-specific dataset viewer would be very cool",
"A support for BIDS it would be perfect, I think it's possible to do all the biosinal can be done with mne. There's a cool community for decoding brain signals, and now with EMG. The new META bracelet EMG is saving things in BIDS.\n\nI can help to interface, coding and try to make this happen. I am available at hugging face discord with the username aristimunha, if some 1-to-1 discuss it would be necessary :)",
"@lhoestq , @cakiki , do you think we can make this happen?",
"If you give me the OK, I'll create the PR to make everything for a Biosignal Reader logic, I already studied the nilabel PR :)",
"That would be an amazing addition ! Feel free to ping me in your PR for review or if you have questions / if I can help",
"@bruAristimunha @lhoestq I've recalled a gold of a resource for BIDS: https://openneuro.org/\n\nDo you think there is a data-easy way to make those visible here on HuggingFace? Afaik they use `datalad` to fetch the data. Maybe the best way is to leave OpenNeuro as-is, not connecting it to HuggingFace at all - just an idea I had spontaneously.",
"I know an \"easy\" way to create interoperability with all biosignal datasets from OpenNeuro =) \n\nFor biosignal data, we can use [EEGDash](https://eegdash.org/) to create a Pytorch dataset, which automates fetch, lazy read, and converts to a pytorch dataset. \n\nI have a question about the best serialization for a Hugging Face dataset, but I can discuss it with some of you on Discord; my username is aristimunha.",
"I can explain it publicly too, but I think a short 5-minute conversation would be better than many, many texts to explain the details.",
"It's ok to have discussions in one place here (or in a separate issue if it's needed) - I also generally check github more often than discord ^^'",
"Hi @bruAristimunha @lhoestq any way we could proceed on this?\nI see someone posted a Nifti vizualization PR: https://github.com/huggingface/datasets/pull/7874 - I think it would be a shame if we couldn't accompany that by a neat way to import BIDS Nifti!",
"@stefanches7 author of #7874 here, would be open to expand the current support to BIDS as well after having a brief look. \nMaybe having a brief call over Discord (my username: TobiasPitters on the huggingface discord server) might help sorting things out, since I am not familiar with BIDS. So getting an understanding over test cases needed, etc. would be great!",
"Hey!!\n\nFrom a bids perspective, I can provide full support for all biosignal types (EEG, iEEG, MEG, EMG). BIDS is a well-established contract format; I believe we can design something that supports the entire medical domain. I think it just requires a few details to be aligned.\n\nFrom my perspective, the tricky part is how to best adapt and serialize from the Hugging Face perspective.\n\nUnder the hood, for the biosignal part, I think I would use [mne](https://mne.tools/) for interoperability and [eegdash](https://eegdash.org/dataset_summary.html) to create the serialized dataset, but we can definitely discuss this further. I will ping you @CloseChoice on Discord.",
"had a discussion with @neurolabusc and here's a quick wrap-up:\n - BIDS support would be huge (@bruAristimunha would be great if we could catch up on that)\n - DICOM support as well, but that might be harder due to a lot of variety in how headers are handled, vendor specifics etc. So to have a reliable pipeline to interact with whole folders of DICOM files (including metadata) would require a lot of work and a lot of testing. Therefore I set https://github.com/huggingface/datasets/pull/7835 back to draft mode. But there are tools that ease the way, especially https://github.com/ImagingDataCommons/highdicom (or potentially https://github.com/QIICR/dcmqi). \n - Getting users would help in order to understand what other formats/features are required therefore loading a bunch of open datasets to the hub using the new Nifti feature would be great. Some tutorials might help here as well.",
"Hi @CloseChoice and @bruAristimunha, glad to meet you both! We could appoint a call; I am currently moving to a new job, so the time slots are limited, but let's connect over Discord and see what we could do.\n\n* BIDS: our hackathon team @zuazo @ekarrieta @lakshya16157 put up a BIDS format converter: https://huggingface.co/spaces/stefanches/OpenBIDSifier. Might be useful for imaging dataset conversion to BIDS.\n* DICOM support: cc @St3V0Bay, the author of DICOM support in CroissantML (https://github.com/mlcommons/croissant/pull/942)\n\ncc @nolden",
"my username is aristimunha within the huggieng face discord to discuss more"
] | 2025-10-09T10:18:24
| 2025-11-26T16:09:43
| null |
MEMBER
| null | null | null | null |
List of formats and libraries we can use to load the data in `datasets`:
- [ ] DICOMs: pydicom
- [x] NIfTIs: nibabel
- [ ] WFDB: wfdb
cc @zaRizk7 for viz
Feel free to comment / suggest other formats and libs you'd like to see, or to share your interest in one of the mentioned formats
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 4,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7804/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7804/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7802
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7802/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7802/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7802/events
|
https://github.com/huggingface/datasets/issues/7802
| 3,497,454,119
|
I_kwDODunzps7Qduon
| 7,802
|
[Docs] Missing documentation for `Dataset.from_dict`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/69421545?v=4",
"events_url": "https://api.github.com/users/aaronshenhao/events{/privacy}",
"followers_url": "https://api.github.com/users/aaronshenhao/followers",
"following_url": "https://api.github.com/users/aaronshenhao/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronshenhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aaronshenhao",
"id": 69421545,
"login": "aaronshenhao",
"node_id": "MDQ6VXNlcjY5NDIxNTQ1",
"organizations_url": "https://api.github.com/users/aaronshenhao/orgs",
"received_events_url": "https://api.github.com/users/aaronshenhao/received_events",
"repos_url": "https://api.github.com/users/aaronshenhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aaronshenhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronshenhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aaronshenhao",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I'd like to work on this documentation issue.",
"Hi I'd like to work on this. I can see the docstring is already in the code. \nCould you confirm:\n1. Is this still available?\n2. Should I add this to the main_classes.md file, or is there a specific \n documentation file I should update?\n3. Are there any formatting guidelines I should follow?\n\nI'm new to contributing but eager to learn!"
] | 2025-10-09T02:54:41
| 2025-10-19T16:09:33
| null |
NONE
| null | null | null | null |
Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the official documentation for the `Dataset` class on HuggingFace.
The method in question:
```python
@classmethod
def from_dict(
cls,
mapping: dict,
features: Optional[Features] = None,
info: Optional[DatasetInfo] = None,
split: Optional[NamedSplit] = None,
) -> "Dataset":
"""
Convert `dict` to a `pyarrow.Table` to create a [`Dataset`].
Important: a dataset created with from_dict() lives in memory
and therefore doesn't have an associated cache directory.
This may change in the future, but in the meantime if you
want to reduce memory usage you should write it back on disk
and reload using e.g. save_to_disk / load_from_disk.
Args:
mapping (`Mapping`):
Mapping of strings to Arrays or Python lists.
features ([`Features`], *optional*):
Dataset features.
info (`DatasetInfo`, *optional*):
Dataset information, like description, citation, etc.
split (`NamedSplit`, *optional*):
Name of the dataset split.
Returns:
[`Dataset`]
"""
```
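For context, a brief usage sketch of the method whose documentation is missing (not part of the original report):
```python
from datasets import ClassLabel, Dataset, Features, Value

ds = Dataset.from_dict(
    {"text": ["great movie", "terrible plot"], "label": [1, 0]},
    features=Features(
        {"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])}
    ),
)
print(ds)     # Dataset({features: ['text', 'label'], num_rows: 2})
print(ds[0])  # {'text': 'great movie', 'label': 1}
```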
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7802/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7802/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7798
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7798/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7798/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7798/events
|
https://github.com/huggingface/datasets/issues/7798
| 3,484,470,782
|
I_kwDODunzps7PsM3-
| 7,798
|
Audio dataset is not decoding on 4.1.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4",
"events_url": "https://api.github.com/users/thewh1teagle/events{/privacy}",
"followers_url": "https://api.github.com/users/thewh1teagle/followers",
"following_url": "https://api.github.com/users/thewh1teagle/following{/other_user}",
"gists_url": "https://api.github.com/users/thewh1teagle/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thewh1teagle",
"id": 61390950,
"login": "thewh1teagle",
"node_id": "MDQ6VXNlcjYxMzkwOTUw",
"organizations_url": "https://api.github.com/users/thewh1teagle/orgs",
"received_events_url": "https://api.github.com/users/thewh1teagle/received_events",
"repos_url": "https://api.github.com/users/thewh1teagle/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thewh1teagle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thewh1teagle/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thewh1teagle",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Previously (datasets<=3.6.0), audio columns were decoded automatically when accessing a row. Now, for performance reasons, audio decoding is lazy by default: you just see the file path unless you explicitly cast the column to Audio.\n\nHere’s the fix (following the current [datasets audio docs](https://huggingface.co/docs/datasets/en/audio_load)\n):\n\n```\nfrom datasets import load_dataset, Audio\n\ndataset = load_dataset(\"MrDragonFox/Elise\", split=\"train\")\n\n# Explicitly decode the audio column\ndataset = dataset.cast_column(\"audio\", Audio(sampling_rate=16_000))\n\nprint(dataset[0][\"audio\"])\n# {'path': '...', 'array': array([...], dtype=float32), 'sampling_rate': 16000}\n```",
"@haitam03-yo's comment is right that the data is not decoded by default anymore indeed, but here is how it works in practice now:\n\nFrom `datasets` v4, audio data are read as [AudioDecoder](https://meta-pytorch.org/torchcodec/0.4/generated/torchcodec.decoders.AudioDecoder.html) objects from torchcodec. This doesn't decode the data by default, but you can call `audio.get_all_samples()` to decode the audio.\n\nSee the documentation on how to process audio data here: https://huggingface.co/docs/datasets/audio_process",
"To resolve this, you need to explicitly cast the audio column to the Audio feature. This will decode the audio data and make it accessible as an array. Here is the corrected code snippet\n\n\nfrom datasets import load_dataset, Audio\n\n# Load your dataset\ndataset = load_dataset(\"MrDragonFox/Elise\", split=\"train\")\n\n# Explicitly cast the 'audio' column to the Audio feature\ndataset = dataset.cast_column(\"audio\", Audio(sampling_rate=16_000))\n\n# Now you can access the decoded audio array\nprint(dataset[0][\"audio\"])\n\nBy adding the cast_column step, you are telling the datasets library to decode the audio data with the specified sampling rate, and you will then be able to access the audio array as you were used to in previous versions."
] | 2025-10-05T06:37:50
| 2025-10-06T14:07:55
| null |
NONE
| null | null | null | null |
### Describe the bug
Entries in the audio column remain non-decoded objects even when accessed.
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
Works fine with `datasets==3.6.0`
Followed the docs in
- https://huggingface.co/docs/datasets/en/audio_load
### Steps to reproduce the bug
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
### Expected behavior
It should decode when accessing the element.
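For reference, a hedged sketch of explicit decoding in datasets >= 4.0, where the value is a torchcodec AudioDecoder (the `data` / `sample_rate` attribute names are assumed from torchcodec's AudioSamples):
```python
from datasets import load_dataset

ds = load_dataset("MrDragonFox/Elise", split="train")

decoder = ds[0]["audio"]             # AudioDecoder object, not a dict
samples = decoder.get_all_samples()  # decode explicitly
print(samples.data.shape, samples.sample_rate)  # attribute names assumed, see lead-in
```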
### Environment info
4.1.1
ubuntu 22.04
Related
- https://github.com/huggingface/datasets/issues/7707
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7798/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7798/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7793
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7793/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7793/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7793/events
|
https://github.com/huggingface/datasets/issues/7793
| 3,459,496,971
|
I_kwDODunzps7OM7wL
| 7,793
|
Cannot load dataset, fails with nested data conversions not implemented for chunked array outputs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41182432?v=4",
"events_url": "https://api.github.com/users/neevparikh/events{/privacy}",
"followers_url": "https://api.github.com/users/neevparikh/followers",
"following_url": "https://api.github.com/users/neevparikh/following{/other_user}",
"gists_url": "https://api.github.com/users/neevparikh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neevparikh",
"id": 41182432,
"login": "neevparikh",
"node_id": "MDQ6VXNlcjQxMTgyNDMy",
"organizations_url": "https://api.github.com/users/neevparikh/orgs",
"received_events_url": "https://api.github.com/users/neevparikh/received_events",
"repos_url": "https://api.github.com/users/neevparikh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neevparikh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neevparikh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neevparikh",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hey @neevparikh,\nThanks for reporting this! I can reproduce the issue and have identified the root cause.\nProblem: The metr-evals/malt-public dataset contains deeply nested conversation data that exceeds PyArrow's 16MB chunk limit. When PyArrow tries to read it in chunks, it hits a fundamental limitation: \"Nested data conversions not implemented for chunked array outputs\".\nRoot Cause: Your dataset has large nested arrays (conversation trees with 4k-87k elements) that get automatically chunked by PyArrow, but the nested data conversion logic can't handle repetition levels across chunk boundaries\n I'm preparing a PR that adds a fallback mechanism to the parquet reader. When this specific error occurs, it will:\n\nDetect the nested data issue\nCombine chunks selectively for problematic columns\nContinue processing normally\n\nThis maintains backward compatibility while fixing the issue for nested datasets like yours.\nWorkaround (if you need immediate access): Try loading with smaller batch sizes:\npythonds = datasets.load_dataset(\"metr-evals/malt-public\", name=\"irrelevant_detail\", \n download_config=datasets.DownloadConfig(\n parquet_batch_size=1000\n ))"
] | 2025-09-27T01:03:12
| 2025-09-27T21:35:31
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi! When I load this dataset, it fails with a pyarrow error. I'm using datasets 4.1.1, though I also see this with datasets 4.1.2
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
Error:
```
Traceback (most recent call last):
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
for _, table in generator:
^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/packaged_modules/parquet/parquet.py", line 93, in _generate_tables
for batch_idx, record_batch in enumerate(
~~~~~~~~~^
parquet_fragment.to_batches(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
):
^
File "pyarrow/_dataset.pyx", line 3904, in _iterator
File "pyarrow/_dataset.pyx", line 3494, in pyarrow._dataset.TaggedRecordBatchIterator.__next__
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/neev/scratch/test_hf.py", line 3, in <module>
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/load.py", line 1412, in load_dataset
builder_instance.download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
download_config=download_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
storage_options=storage_options,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager=dl_manager,
^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
**download_and_prepare_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
### Expected behavior
The dataset loads
### Environment info
Datasets: 4.1.1
Python: 3.13
Platform: Macos
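A possible workaround sketch (untested; the shard filename below is a placeholder assumption) is to download a parquet file directly and read it row group by row group with pyarrow, which sidesteps the record-batch conversion path that raises the error:
```python
import pyarrow as pa
import pyarrow.parquet as pq
from datasets import Dataset
from huggingface_hub import hf_hub_download

# Placeholder filename: the actual shard paths in the repo may differ.
path = hf_hub_download(
    repo_id="metr-evals/malt-public",
    filename="irrelevant_detail/train-00000-of-00001.parquet",
    repo_type="dataset",
)

pf = pq.ParquetFile(path)
# Reading whole row groups yields Tables (chunked nested columns are allowed there),
# avoiding the nested-data-to-RecordBatch conversion that fails.
tables = [pf.read_row_group(i) for i in range(pf.num_row_groups)]
ds = Dataset(pa.concat_tables(tables))
print(ds)
```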
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7793/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7793/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7792
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7792/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7792/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7792/events
|
https://github.com/huggingface/datasets/issues/7792
| 3,456,802,210
|
I_kwDODunzps7OCp2i
| 7,792
|
Concatenate IterableDataset instances and distribute underlying shards in a RoundRobin manner
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/LTMeyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LTMeyer",
"id": 13559010,
"login": "LTMeyer",
"node_id": "MDQ6VXNlcjEzNTU5MDEw",
"organizations_url": "https://api.github.com/users/LTMeyer/orgs",
"received_events_url": "https://api.github.com/users/LTMeyer/received_events",
"repos_url": "https://api.github.com/users/LTMeyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LTMeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LTMeyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LTMeyer",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"# With `datasets.Dataset`\n\nHere is an small script that shows the distribution differences of samples between `interleave_datasets`, `concatenate_datasets` and `concatenate_datasets` + shuffling.\n\n```python\nimport datasets as hf_datasets\n\ndef gen(dataset: int, n_samples: int):\n for i in range(n_samples):\n yield {\"dataset\": dataset, \"sample\": i}\n\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 2})\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 1})\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 3})\n\nn_workers = 3\nprint(f\"Simulate run with {n_workers} workers\")\n\nprint(\"Interleave datasets\")\nfor w in range(n_workers):\n ds_interleave = hf_datasets.interleave_datasets([ds_1, ds_2, ds_3]).shard(n_workers, w)\n for i, sample in enumerate(ds_interleave):\n print(f\"Worker {w} process sample {i} {sample}\")\n\nprint(\"Concatenate datasets\")\nfor w in range(n_workers):\n ds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shard(n_workers, w)\n for i, sample in enumerate(ds_concatenate):\n print(f\"Worker {w} process sample {i} {sample}\")\n\nprint(\"Concated and shuffled datasets\")\nfor w in range(n_workers):\n ds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shuffle().shard(n_workers, w)\n for i, sample in enumerate(ds_concatenate):\n print(f\"Worker {w} process sample {i} {sample}\")\n```\n\n> Interleave datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 0}\n\n> Concatenate datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 1 {'dataset': 2, 'sample': 2}\n\n> Concated and shuffled datasets\nWorker 0 process sample 0 {'dataset': 2, 'sample': 2}\nWorker 0 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 2}\nWorker 2 process sample 1 {'dataset': 0, 'sample': 0}\n\nWithout shuffling, round robin would yield:\n> Worker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 1 {'dataset': 2, 'sample': 2}",
"# With `datasets.IterableDataset`\n\nThe above works for `Dataset`, but with a sharded `IterableDataset` some data get discarded. See the following results obtained with the script below.\n\n> Simulate run with 3 workers\n\n> Interleave datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 1 fails with list index out of range.\nWorker 2 fails with list index out of range.\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n\n> Concatenate datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 1, 'sample': 0}\nWorker 0 process sample 2 {'dataset': 2, 'sample': 0}\nWorker 1 fails with list index out of range\nWorker 2 fails with list index out of range\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n\n> Concated and shuffled datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 1, 'sample': 0}\nWorker 0 process sample 2 {'dataset': 2, 'sample': 0}\nWorker 1 fails with list index out of range\nWorker 2 fails with list index out of range\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n\n<details>\n\n<summary>Experiment script</summary>\n\n```python\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 2}).to_iterable_dataset(\n num_shards=2\n)\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 1}).to_iterable_dataset(\n num_shards=1\n)\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 3}).to_iterable_dataset(\n num_shards=3\n)\n\nn_workers = 3\nprint(f\"Simulate run with {n_workers} workers\")\n\nprint(\"\\nInterleave datasets\")\nds_interleave = hf_datasets.interleave_datasets([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_interleave.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}.\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_interleave, num_workers=n_workers):\n print(f\"{sample}\")\n\nprint(\"\\nConcatenate datasets\")\nds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_concatenate.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_concatenate, num_workers=n_workers):\n print(f\"{sample}\")\n\nprint(\"\\nConcated and shuffled datasets\")\nds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shuffle()\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_concatenate.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_concatenate, 
num_workers=n_workers):\n print(f\"{sample}\")\n```\n\n</details>\n\n# Round Robin with fixed logic\n\n> I started implementing the following, but I'm afraid my sharding logic is incorrect.\n\nHere is a solution for mixing the data in a round robin fashion that allows to distribute the data to all workers. In the previous example above only 1 worker over 3 was actually retrieving data, which resulted in discarding some data.\n\n```python\ndef shard_data_sources(self, num_shards: int, index: int, contiguous=True) -> \"MixMultiSourceExampleIterable\":\n \"\"\"Shard the underlying iterables in a roundrobin manner.\n\n Let's consider we have our iterables as [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]],\n and we request 3 shards.\n index 0 gets s0_0 s2_0\n index 1 gets s0_1 s2_1\n index 2 gets s1_0 s2_3\n \"\"\"\n return MixMultiSourcesExampleIterable(\n list(\n islice(\n # flatten all underlying iterables (fixed logic)\n [\n ex_iterable.shard_data_sources(ex_iterable.num_shards, index)\n for ex_iterable in self.ex_iterables\n for index in range(ex_iterable.num_shards)\n ],\n # offset the starting point by the index\n index,\n # take over the full list, so exhaust the iterators\n None,\n # step by the number of shards requested\n num_shards,\n )\n )\n )\n```\n\nEditing the example above with the following we obtain the expected result:\n```python\nprint(\"\\nMix datasets\")\nds_mix = mix_dataset([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_mix.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_mix, num_workers=n_workers):\n print(f\"{sample}\")\n```\n> Mix datasets\nMix datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 1 {'dataset': 2, 'sample': 2}\nWith dataloader\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([0]), 'sample': tensor([1])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([1])}\n{'dataset': tensor([2]), 'sample': tensor([2])}\n\n# Questions \n\n- The example is quite small, showing that some data get discarded, but on large datasets is this significant?\n- How does the suggested solution interplays with shuffling?\n\n\n\n\n",
"# Larger Experiment\n\n> The example is quite small, showing that some data get discarded, but on large datasets is this significant?\n\nContinuing the experiment above, but with 3 larger and unbalanced datasets, with respectively 1000, 150, and 300 samples, and a dataloader with 4 workers:\n \n> Interleave datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 300 samples\n\n> Concatenate datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\n> Concated and shuffled datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\n> Mix datasets\nWith dataloader\nYield 1405 samples\n\nThe dataset mixing proposed above is the only one that yields all the samples while using all the dataloaders.\nAdditional checks should include training metrics (does it improve training quality to mix the data like this), and behavior check in a DDP settings, we don't want to face any deadlock due to some GPU having more batches than other. But this later point should be already handled by the iterator of the `IterableDataset`.\n\n# Follow up?\n\n@lhoestq would there be any interest in making a PR of it? Otherwise I can close the issue as I found a solution to my problem. ",
"I believe this PR could solve your issue? :)\n\nhttps://github.com/huggingface/datasets/pull/7786",
"> I believe this PR could solve your issue? :)\n\nThank you @lhoestq for the reply.\nI have just tested it with the script above. It gives:\n\n> Interleave datasets without replacement\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\nIf we compare with the original `interleave_dataset` method it produces 405 samples more. However, it only uses 1 worker on the 4 available. Moreover it doesn't yield all the samples as the mixing strategy with RoundRobin above does (1405 samples vs 705).",
"@LTMeyer With the following script and using the code from #7786 I get all 1450 samples\n\n```\nimport datasets as hf_datasets\n\n\ndef gen(dataset: int, n_samples: int):\n for i in range(n_samples):\n yield {\"dataset\": dataset, \"sample\": i}\n\n\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 1000}).to_iterable_dataset()\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 150}).to_iterable_dataset()\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 300}).to_iterable_dataset()\n\nprint(\"Interleave datasets\")\nds_interleave = hf_datasets.interleave_datasets(\n [ds_1, ds_2, ds_3],\n probabilities=[1 / 3, 1 / 3, 1 / 3],\n stopping_strategy=\"all_exhausted_without_replacement\",\n)\nfor i, sample in enumerate(ds_interleave):\n print(f\"process sample {i} {sample}\")\n```\nI'm not sure on the workers side how many will be spawned and so on. ",
"> [@LTMeyer](https://github.com/LTMeyer) With the following script and using the code from [#7786](https://github.com/huggingface/datasets/pull/7786) I get all 1450 samples\n\nThis depends on the number of shards and the number of processes being used.\nIn the example below there is only one shard per dataset (the default of `to_iterable_dataset` method). Then, the for loop is running in the main process. It thus consumes all the shards, hence the 1450 samples.\n\n> \n> ```\n> import datasets as hf_datasets\n> \n> \n> def gen(dataset: int, n_samples: int):\n> for i in range(n_samples):\n> yield {\"dataset\": dataset, \"sample\": i}\n> \n> \n> ds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 1000}).to_iterable_dataset()\n> ds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 150}).to_iterable_dataset()\n> ds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 300}).to_iterable_dataset()\n> \n> print(\"Interleave datasets\")\n> ds_interleave = hf_datasets.interleave_datasets(\n> [ds_1, ds_2, ds_3],\n> probabilities=[1 / 3, 1 / 3, 1 / 3],\n> stopping_strategy=\"all_exhausted_without_replacement\",\n> )\n> for i, sample in enumerate(ds_interleave):\n> print(f\"process sample {i} {sample}\")\n> ```\n> \n\n\n> I'm not sure on the workers side how many will be spawned and so on.\n\nWhile using the data to train a model, I would like to use the `torch.utils.data.DataLoader` to feed batches of data to my model. To make the data loading fast, it is common to use `num_workers>0` in the dataloader. This will consume data in parallel. In practice, it copies the dataset instance and read in parallel different chunks of data. These chunks correspond to the underlying shards of the iterable dataset.\n\nIf we have 1 shard per dataset, as it is the case in the example above, the dataloading will indeed get all the 1450 samples, but it will run only in one process even if multiple are available. This is inefficient because it doesn't utilize all available resources. See the script and results below.\n\n```python\nfor num_workers in [0, 1, 2, 3, 4]:\n print(f\"Dataloader with {num_workers} workers.\")\n dataloader = DataLoader(ds_interleave, num_workers=num_workers, batch_size=1)\n for i, sample in enumerate(dataloader, start=1):\n pass\n print(f\"{i} processed samples\")\n```\n\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n1450 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n1450 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\n1450 processed samples\n```\n\nNow if we shard our data differently, like 2, 1, and 3 for each dataset respectively as the [previous example](https://github.com/huggingface/datasets/issues/7792#issuecomment-3345970293), and use a dataloader with different number of workers (same script as above), we obtain:\n\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). 
Stopping 1 dataloader workers.\n850 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n750 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\n750 processed samples\n```",
"I added a small fix to your PR @radulescupetru to try to make @LTMeyer 's example work :)\n\nCan you confirm it works for you now @LTMeyer ?\n\nNote that maximum parallelism requires each subset to have num_shards >= num_workers, otherwise there aren't enough shards to distribute to every worker for interleaving. In your example one of the subsets has only 1 shard, so only 1 worker can take care of interleaving.",
"> Can you confirm it works for you now [@LTMeyer](https://github.com/LTMeyer) ?\n\nResult with https://github.com/huggingface/datasets/pull/7786/commits/a547d81469128bea4acc3bcc2a4a6a95968936ee:\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n1450 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n1450 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\n1450 processed samples\n```\n\n I have checked with the script above and I confirm that all samples are now correctly returned, thank you @lhoestq .\n\n> Note that maximum parallelism requires each subset to have num_shards >= num_workers, otherwise there aren't enough shards to distribute to every worker for interleaving. In your example one of the subsets has only 1 shard, so only 1 worker can take care of interleaving.\n\nThis point I'm not sure I understand. That is maybe where @radulescupetru's intent and mine differ. Why should we limit the number of workers to the minimum number of shards? My initial goal was to distribute shards among workers to maximize data loading speed, and to mix the data so batches are representative of the whole dataset and diverse enough (hence the round-robin). \n\nIn the example above, we have 6 shards in total, can we not distribute these shards among workers? That what the `MixMultiSourcesExampleIterable` in https://github.com/huggingface/datasets/issues/7792#issuecomment-3345970293 above does.\n- If 2 workers, 3 shards for each. \n- If 3 workers, 2 shards for each.\n- If 4 workers, the 2 first ones get 2 shards while the two last ones get only 1.\n- Above 6 workers, the 6 first ones get 1 shard each, and the remaining workers get none.\n\n\n",
"@LTMeyer I think it's just a design choice that datasets library took. From my interaction with it, it seems that even when concatenating or interleaving, individual components are still treated individually (for example, num_shards is not summed).\n\nI guess in a real scenario you wouldn't end up with 1 shard only, but it's true that you need to be a bit careful with the setup. For workers it's a bit more automated in the sense that if you have more it will stop the extra ones, but when distributing a dataset over multiple gpus it's even more tricky as if the number of shards is not a factor of world size iterating is slower.",
"> [@LTMeyer](https://github.com/LTMeyer) I think it's just a design choice that datasets library took. From my interaction with it, it seems that even when concatenating or interleaving, individual components are still treated individually (for example, num_shards is not summed).\n\nIndeed. I am curious to know if there is any explanation for this choice that I am missing.\n\n> I guess in a real scenario you wouldn't end up with 1 shard only, but it's true that you need to be a bit careful with the setup. \n\nIn my case I would like to mix many small datasets which are individually based on only few shards. So it's actually close to the case with 1 shard only.\n\n> For workers it's a bit more automated in the sense that if you have more it will stop the extra ones, but when distributing a dataset over multiple gpus it's even more tricky as if the number of shards is not a factor of world size iterating is slower.\n\nMy understanding is that, in a multi-gpu settings, we want each GPU to receive the same number of batches to avoid deadlock in any synchronization process. \nMulti-GPU related sharding of the `IterableDataset` is managed there https://github.com/huggingface/datasets/blob/4.1.1/src/datasets/iterable_dataset.py#L2371-L2392,\nwhile the sharding for dataloaders with multiple workers is handled there https://github.com/huggingface/datasets/blob/4.1.1/src/datasets/iterable_dataset.py#L2292-L2314.\n\nHere is a script to check the behavior in case of multi-gpus, using `split_dataset_by_node`. In the example I consider just 2 GPUs.\n\n```python\nworld_size = 2\nfor num_workers in [0, 1, 2, 3, 4]:\n for rank in range(world_size):\n print(f\"Rank {rank}\")\n ds_interleave_rank = split_dataset_by_node(ds_interleave, rank, world_size)\n print(f\"Dataloader with {num_workers} workers.\")\n dataloader = DataLoader(ds_interleave_rank, num_workers=num_workers, batch_size=1)\n for i in enumerate(dataloader, start=1):\n pass\n print(f\"{i} processed samples\")\n print(\"\\n\")\n```\n\nThe results using https://github.com/huggingface/datasets/pull/7786/commits/455bfaaa6d574aa9d9c9592baee390017512cc5f:\n```\nRank 0\nDataloader with 0 workers.\n725 processed samples\nRank 1\nDataloader with 0 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 1 workers.\n725 processed samples\nRank 1\nDataloader with 1 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 2 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 3 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). 
Stopping 3 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 4 workers.\n725 processed samples\n```\n\nIf now I use the mixing described above the results are:\n```\nRank 0\nDataloader with 0 workers.\n750 processed samples\nRank 1\nDataloader with 0 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 1 workers.\n750 processed samples\nRank 1\nDataloader with 1 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 2 workers.\n750 processed samples\nRank 1\nDataloader with 2 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 3 workers.\n750 processed samples\nRank 1\nDataloader with 3 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 4 workers.\n750 processed samples\nRank 1\nDataloader with 4 workers.\n700 processed samples\n```\n\nDifferent GPUs received different number of batches which is problematic. The interleave method, on the other hand, feeds each GPU with the same number of batches. Nonetheless, it doesn't leverage all available workers.\nI'll check if I can fix the distribution of shards across GPU in the last configuration.",
"When concatenating or interleaving, the resulting `num_shards` is the *minimum `num_shards` of the input datasets*. This allows each new shard to always contain data from every input dataset. This ensures in every shard the right sampling when interleaving and the right data order when concatenating.\n\nSumming the dataset shards isn't ideal since each shard would contain data from only one of the dataset and would not contain any interleaved/concatenated data.",
"Thank you @lhoestq, it makes perfect sense. The part I am missing is that if I concatenate many datasets with small number of shards it will result in a global dataset with not so many shards, thus limiting the use of available workers. Data loading will be consequently inefficient. I was looking for a solution to leverage all parallelism available to maximize data loading speed.\n\nMy original use case was:\nI want to use a dataset stored on the HF hub. It is composed of many subfolders. Each of this subfolder contain only a few shards. I would like to use the dataset but only on a subset of folders, while keeping information about the origin of each sample (i.e. from which subfolder they come from).\nThe first part would possible with the `data_files` argument of `load_dataset` method. However, I would not have the origin information about the sample, as it is not provided in the original dataset. I was thus thinking about considering each subfolder as an independent HF iterable dataset and concatenate them. This method does not work because it drastically reduces the dataloading efficiency due to the low number of shards.\n\n> Summing the dataset shards isn't ideal `since` each shard would contain data from only one of the dataset and would not contain any interleaved/concatenated data.\n\nThis is not necessarily a problem for my use case. It will be the case for the original dataset anyway.",
"Also, I notice in the example above that if we modify the number of shards, we get different number of samples per GPU and workers even with the implementation of @radulescupetru. This will cause a deadlock in the DDP. So I guess HF expects all shards to contain the same number of samples. Is that a correct assumption @lhoestq?\n\nSetting the number of shards for the datasets above to 2, 2 and 3. Using the `interleave_datasets` I get the following:\n```\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 0 workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 0 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 1 workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 1 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 2 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 3 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 4 workers.\n675 processed samples\n```",
"I see @LTMeyer, that makes sense. Do you think we should sum the shards by default for concatenating then ? I feel like your use case is more important than ensuring each worker has data of every subdataset in order.\n\n(I wouldn't touch the interleaving logic though)\n\n> Also, I notice in the example above that if we modify the number of shards, we get different number of samples per GPU and workers even with the implementation of @radulescupetru. This will cause a deadlock in the DDP. So I guess HF expects all shards to contain the same number of samples. Is that a correct assumption @lhoestq?\n\nShards rarely have the same number of samples, so the DDP algorithm itself should be able to stop on its own or have a strategy to circumvent this. For example it can loop until all the nodes have exhausted their data:\n\n```python\ndef loop():\n while True:\n yield from dataloader\n yield \"end\"\n\nfor x in loop():\n if x == \"end\":\n exhausted[rank] = True\n continue\n # stop once the data from all the ranks are exhausted\n dist.all_reduce(exhausted)\n if torch.all(exhausted):\n break\n # do your forward pass + loss here\n # model.forward(...)\n```\n\nI made a full example here: https://github.com/huggingface/datasets/issues/6623#issuecomment-2379458138",
"To summarize, and highlight the distinction with https://github.com/huggingface/datasets/pull/7786, there are actually two feature requests:\n1. Similarly to `interleave_datasets`, we want to interleave the longest dataset without repetition. This is handled by https://github.com/huggingface/datasets/pull/7786, and is consistant with the rest of the HF features (i.e. `concatenate_datasets` and `interleave_datasets`);\n2. We want to be able to _fuse_ datasets and distribute their shards across workers to maximize data loading speed.\n\n > I feel like your use case is more important than ensuring each worker has data of every subdataset in order.\n\nIndeed my use case, pointed as 2. above is first about maximizing data loading speed and second about mixing the data. The order of priority seems to be the opposite in 1.\n\n> Do you think we should sum the shards by default for concatenating then?\n\nI think the library should at least provide a method for this. Users can then decide what matters the most for their use case (data order or dataloading speed). What do you think?\n\n> Shards rarely have the same number of samples, so the DDP algorithm itself should be able to stop on its own or have a strategy to circumvent this.\n\nIf imbalanced data stream in a DDP context is not the responsibility of the datasets library, it is, for me, a reason more to provides a fuse or mix dataset method that sum the shards.\n\n> I made a full example here: https://github.com/huggingface/datasets/issues/6623#issuecomment-2379458138 \n\nThank you for the example. Pytorch now provides also utilities to handle this problematic case, see [Join context manager in DDP](https://docs.pytorch.org/tutorials/advanced/generic_join.html#:%7E:text=The%20context%20manager%20allows%20the,shadowed%20are%20specified%20by%20hooks)",
"I'm closing this issue because of several existing solutions:\n- https://github.com/huggingface/datasets/pull/7786 allows to interleave datasets without replacement.\n- Using [`.shard`](https://huggingface.co/docs/datasets/v4.2.0/en/package_reference/main_classes#datasets.IterableDataset.shard) instead of [`split_dataset_by_node`](https://huggingface.co/docs/datasets/v4.2.0/en/package_reference/main_classes#datasets.distributed.split_dataset_by_node). Given _m_ shards and _n_ ranks, if m % n != 0, the later function will make each of the _n_ ranks go through all of the _m_ shards, although not fetching the same data. On the other hand, the former function can distribute the _m_ shards across the _n_ ranks and make better use of parallel reads.\n\nThank you @lhoestq and @radulescupetru for the help."
] | 2025-09-26T10:05:19
| 2025-10-15T18:05:23
| 2025-10-15T18:05:23
|
NONE
| null | null | null | null |
### Feature request
I would like to be able to concatenate multiple `IterableDataset` instances with possibly different features, and then stream the result in parallel (both using DDP and multiple workers in the PyTorch DataLoader). I want the merged dataset to be well balanced across the different processes.
### Motivation
I want to train a model on a combination of datasets, which I can convert to a single representation. This applies both to converting items from different datasets to the same Python class and to using a tokenizer on multiple modalities.
Assuming that my original datasets are not necessarily well balanced, as they may have different sizes and thus different numbers of shards, I would like the merged dataset to be distributed evenly over the multiple processes. I don't mind if it's not perfectly balanced, and as a result some workers of the torch DataLoader do nothing, as long as DDP is properly handled, causing no deadlock.
### What I've tried
I've tried the two functions already provided in datasets, namely `interleave_datasets` and `concatenate_datasets`.
- Interleave seems to be the closest to what I'm trying to do. However, it doesn't suit my purpose because, as I understand it, it either stops as soon as one of the dataset sources is exhausted, or repeats the smallest source's items until the largest is exhausted. I would like something in between, similar to what [roundrobin does](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.roundrobin).
- Concatenate does not mix the data enough and one dataset may be overrepresented in some early batches.
Let's consider we have 3 datasets composed of different numbers of shards as follows: [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]], where s denotes an underlying shard, the first index the dataset, and the second the shard number.
If we request 3 shards from `shard_data_sources`, we should obtain the following:
index 0 gets s0_0 s2_0
index 1 gets s0_1 s2_1
index 2 gets s1_0 s2_3
I started implementing the following, but I'm afraid my sharding logic is incorrect.
```python
from copy import deepcopy
from itertools import chain, islice
import datasets
import numpy as np
from datasets import IterableDataset
from datasets.iterable_dataset import _BaseExamplesIterable
from more_itertools import roundrobin
class MixMultiSourcesExampleIterable(_BaseExamplesIterable):
def __init__(self, ex_iterables: list[_BaseExamplesIterable]):
super().__init__()
self.ex_iterables = ex_iterables
def _init_state_dict(self) -> dict:
self._state_dict = {
"ex_iterables": [ex_iterable._init_state_dict() for ex_iterable in self.ex_iterables],
"type": self.__class__.__name__,
}
return self._state_dict
@property
def num_shards(self) -> int:
return sum(ex_iterable.num_shards for ex_iterable in self.ex_iterables)
def __iter__(self):
yield from roundrobin(*self.ex_iterables)
def shuffle_data_sources(self, generator: np.random.Generator) -> "MixMultiSourcesExampleIterable":
"""Shuffle the list of examples iterable, as well as each underlying examples iterable."""
rng = deepcopy(generator)
ex_iterables = list(self.ex_iterables)
rng.shuffle(ex_iterables)
ex_iterables = [ex_iterable.shuffle_data_sources(generator) for ex_iterable in ex_iterables]
return MixMultiSourcesExampleIterable(ex_iterables)
    def shard_data_sources(self, num_shards: int, index: int, contiguous=True) -> "MixMultiSourcesExampleIterable":
"""Shard the underlying iterables in a roundrobin manner.
Let's consider we have our iterables as [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]],
and we request 3 shards.
index 0 gets s0_0 s2_0
index 1 gets s0_1 s2_1
index 2 gets s1_0 s2_3
"""
return MixMultiSourcesExampleIterable(
list(
islice(
# flatten all underlying iterables
chain.from_iterable([ex_iterable.shard_data_sources(1, 0) for ex_iterable in self.ex_iterables]),
# offset the starting point by the index
index,
# take over the full list, so exhaust the iterators
None,
# step by the number of shards requested
num_shards,
)
)
)
def mix_dataset(iterable_datasets: list[datasets.IterableDataset]) -> IterableDataset:
ex_iterable = MixMultiSourcesExampleIterable([ds._ex_iterable for ds in iterable_datasets])
return IterableDataset(
ex_iterable, distributed=iterable_datasets[0]._distributed, formatting=iterable_datasets[0]._formatting
)
```
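A short usage sketch of the `mix_dataset` helper above, with toy generator-backed datasets and a PyTorch `DataLoader` (the sample counts and shard counts are illustrative):
```python
import datasets as hf_datasets
from torch.utils.data import DataLoader

def gen(dataset: int, n_samples: int):
    for i in range(n_samples):
        yield {"dataset": dataset, "sample": i}

# Three small datasets with different numbers of shards (2, 1 and 3).
ds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={"dataset": 0, "n_samples": 2}).to_iterable_dataset(num_shards=2)
ds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={"dataset": 1, "n_samples": 1}).to_iterable_dataset(num_shards=1)
ds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={"dataset": 2, "n_samples": 3}).to_iterable_dataset(num_shards=3)

ds_mix = mix_dataset([ds_1, ds_2, ds_3])
for batch in DataLoader(ds_mix, num_workers=3, batch_size=1):
    print(batch)  # all 6 samples are yielded, spread across the 3 workers
```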
### Questions
- Am I missing something? Is there a way to use `interleave_datasets` or `concatenate_datasets` to fit my purpose?
- Would it be the right approach to spread the maximum number of underlying shards across my different processes?
### Your contribution
As much as I can.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/LTMeyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LTMeyer",
"id": 13559010,
"login": "LTMeyer",
"node_id": "MDQ6VXNlcjEzNTU5MDEw",
"organizations_url": "https://api.github.com/users/LTMeyer/orgs",
"received_events_url": "https://api.github.com/users/LTMeyer/received_events",
"repos_url": "https://api.github.com/users/LTMeyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LTMeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LTMeyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LTMeyer",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7792/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7792/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 19 days, 8:00:04
|
https://api.github.com/repos/huggingface/datasets/issues/7883
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7883/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7883/events
|
https://github.com/huggingface/datasets/issues/7883
| 3,668,182,561
|
I_kwDODunzps7apAYh
| 7,883
|
Data.to_csv() cannot be recognized by pylance
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/154290630?v=4",
"events_url": "https://api.github.com/users/xi4ngxin/events{/privacy}",
"followers_url": "https://api.github.com/users/xi4ngxin/followers",
"following_url": "https://api.github.com/users/xi4ngxin/following{/other_user}",
"gists_url": "https://api.github.com/users/xi4ngxin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xi4ngxin",
"id": 154290630,
"login": "xi4ngxin",
"node_id": "U_kgDOCTJJxg",
"organizations_url": "https://api.github.com/users/xi4ngxin/orgs",
"received_events_url": "https://api.github.com/users/xi4ngxin/received_events",
"repos_url": "https://api.github.com/users/xi4ngxin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xi4ngxin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xi4ngxin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xi4ngxin",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-11-26T16:16:56
| 2025-11-26T16:16:56
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi, everyone ! I am a beginner with datasets.
I am testing reading multiple CSV files from a zip archive. The result of reading the dataset shows success, and it can ultimately be correctly saved to CSV.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/s]
DatasetDict({
train: Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', ' 对方钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
})
```
However, Pylance gives me the following error:
```
Cannot access attribute "to_csv" for class "DatasetDict"
Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)
Cannot access attribute "to_csv" for class "IterableDatasetDict"
Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)
(method) to_csv: Unknown | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, num_proc: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int) | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int)
```
I ignored the error and continued executing to get the correct result:
```
Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', '对方 钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
```
Since the data volume is small, I manually merged the CSV files, and the final result is consistent with what the program saved.
It looks like:
<img width="1264" height="150" alt="Image" src="https://github.com/user-attachments/assets/743540d7-ad8c-4531-ae7e-de71a5243a32" />
### Steps to reproduce the bug
This is my code:
```
from datasets import load_dataset
def main():
    url = "data/test.zip"
    data_files = {"train": url}
    dataset = load_dataset("csv", data_files=data_files, split="train", encoding="gbk", skiprows=2)
    # print(dataset)
    dataset.to_csv("data/test.csv")

if __name__ == "__main__":
    main()
```
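One way to make Pylance happy, assuming the warning comes from `load_dataset`'s union return type (`Dataset | DatasetDict | IterableDataset | IterableDatasetDict`), which the static checker cannot narrow from the `split` argument alone, is to narrow the type explicitly; a minimal sketch:
```python
from datasets import Dataset, load_dataset

dataset = load_dataset("csv", data_files={"train": "data/test.zip"}, split="train", encoding="gbk", skiprows=2)
assert isinstance(dataset, Dataset)  # narrows the union type so Pylance sees to_csv
dataset.to_csv("data/test.csv")
```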
### Expected behavior
I want to know why this happens. Is there something wrong with my code?
### Environment info
OS: Windows 11 (upgraded from Windows_NT x64 10.0.22631)
Editor:
VS Code Version: 1.106.2 (user setup)
"datasets" version = "4.4.1"
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7883/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7882
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7882/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7882/events
|
https://github.com/huggingface/datasets/issues/7882
| 3,667,664,527
|
I_kwDODunzps7anB6P
| 7,882
|
Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6270922?v=4",
"events_url": "https://api.github.com/users/Oligou/events{/privacy}",
"followers_url": "https://api.github.com/users/Oligou/followers",
"following_url": "https://api.github.com/users/Oligou/following{/other_user}",
"gists_url": "https://api.github.com/users/Oligou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Oligou",
"id": 6270922,
"login": "Oligou",
"node_id": "MDQ6VXNlcjYyNzA5MjI=",
"organizations_url": "https://api.github.com/users/Oligou/orgs",
"received_events_url": "https://api.github.com/users/Oligou/received_events",
"repos_url": "https://api.github.com/users/Oligou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Oligou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oligou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Oligou",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-11-26T14:06:02
| 2025-11-26T14:06:02
| null |
NONE
| null | null | null | null |
### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-42/000_00027.parquet
Discussion: https://huggingface.co/datasets/epfml/FineWeb-HQ/discussions/2
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"epfml/FineWeb-HQ",
data_files="data/CC-MAIN-2024-26/000_00003.parquet",
)
```
Error message:
```
HfHubHTTPError: 403 Forbidden: None.
Cannot access content at: https://cdn-lfs-us-1.hf.co/repos/...
Make sure your token has the correct permissions.
...
<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
```
### Expected behavior
It should load the dataset for all files.
### Environment info
- python 3.10
- datasets 4.4.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7882/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7880/events
|
https://github.com/huggingface/datasets/issues/7880
| 3,667,561,864
|
I_kwDODunzps7amo2I
| 7,880
|
Spurious label column created when audiofolder/imagefolder directories match split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neha222222",
"id": 132138786,
"login": "neha222222",
"node_id": "U_kgDOB-BHIg",
"organizations_url": "https://api.github.com/users/neha222222/orgs",
"received_events_url": "https://api.github.com/users/neha222222/received_events",
"repos_url": "https://api.github.com/users/neha222222/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neha222222/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neha222222",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-11-26T13:36:24
| 2025-11-26T13:36:24
| null |
NONE
| null | null | null | null |
## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```
from datasets import load_dataset
ds = load_dataset("datasets-examples/doc-audio-4")
print(ds["train"].features)
```
Shows 'label' column with ClassLabel(names=['test', 'train']) - incorrect!
## Root cause
In `folder_based_builder.py`, the `labels` set is accumulated across ALL splits (line 77). When directories are `train/` and `test/`:
- `labels = {"train", "test"}` → `len(labels) > 1` → `add_labels = True`
- Spurious label column is created with split names as class labels
## Expected behavior
No `label` column should be added when directory names match split names.
## Proposed fix
Skip label inference when inferred labels match split names.
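A minimal sketch of what such a guard could look like (the set of split names and the helper name are assumptions, not the actual `folder_based_builder.py` code):
```python
# Hypothetical guard before enabling label inference in folder_based_builder.py
KNOWN_SPLIT_NAMES = {"train", "test", "validation", "valid", "dev"}  # assumption

def should_add_labels(labels: set[str]) -> bool:
    # Skip label inference when the inferred "labels" are just split directory names
    if labels and labels <= KNOWN_SPLIT_NAMES:
        return False
    return len(labels) > 1
```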
cc @lhoestq
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7880/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7879
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7879/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7879/events
|
https://github.com/huggingface/datasets/issues/7879
| 3,657,249,446
|
I_kwDODunzps7Z_TKm
| 7,879
|
python core dump when downloading dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5960219?v=4",
"events_url": "https://api.github.com/users/hansewetz/events{/privacy}",
"followers_url": "https://api.github.com/users/hansewetz/followers",
"following_url": "https://api.github.com/users/hansewetz/following{/other_user}",
"gists_url": "https://api.github.com/users/hansewetz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hansewetz",
"id": 5960219,
"login": "hansewetz",
"node_id": "MDQ6VXNlcjU5NjAyMTk=",
"organizations_url": "https://api.github.com/users/hansewetz/orgs",
"received_events_url": "https://api.github.com/users/hansewetz/received_events",
"repos_url": "https://api.github.com/users/hansewetz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hansewetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hansewetz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hansewetz",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi @hansewetz I'm curious, for me it works just fine. Are you still observing the issue?",
"Yup ... still the same issue.\nHowever, after adding a ```sleep(1)``` call after the ``` for``` loop by accident during debugging, the program terminates properly (not a good solution though ... :-) ).\nAre there some threads created that handles the download that are still running when the program exits?\nHaven't had time yet to go through the code in ```iterable_dataset.py::IterableDataset```\n",
"Interesting, I was able to reproduce it, on a jupyter notebook the code runs just fine, as a Python script indeed it seems to never finish running (which is probably leading to the core dumped error). I'll try and take a look at the source code as well to see if I can figure it out.",
"Hi @hansewetz ,\nIf possible can I be assigned with this issue?\n\n",
"```If possible can I be assigned with this issue?```\nHi, I don't know how assignments work here and who can take decisions about assignments ... ",
"Hi @hansewetz and @Aymuos22, I have made some progress:\n\n1) Confirmed last working version is 3.1.0\n\n2) From 3.1.0 to 3.2.0, there was a change in how parquet files are read (see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py/#168).\n\nThe issue seems to be the following code:\n\n```\nparquet_fragment.to_batches(\n batch_size=batch_size,\n columns=self.config.columns,\n filter=filter_expr,\n batch_readahead=0,\n fragment_readahead=0,\n )\n```\n\nAdding a `use_threads=False` parameter to the `to_batches` call solves the bug. However, this seems far from an optimal solution, since we'd like to be able to use multiple threads for reading the fragments. \n\nI'll keep investigating to see if there's a better solution.",
"Hi @lhoestq, may I ask if the current behaviour was expected by you folks and you don't think it needs solving, or should I keep on investigating a compromise between using multithreading / avoid unexpected behaviour? Thanks in advance :) ",
"Having the same issue. the code never stops executing. Using datasets 4.4.1\nTried with \"islice\" as well. When the streaming flag is True, the code doesn't end execution. On vs-code.",
"The issue on pyarrow side is here: https://github.com/apache/arrow/issues/45214 and the original issue in `datasets` here: https://github.com/huggingface/datasets/issues/7357\n\nIt would be cool to have a fix on the pyarrow side",
"Thank you very much @lhoestq, I'm reading the issue thread in pyarrow and realizing you've been raising awareness around this for a long time now. When I have some time I'll look at @pitrou's PR to see if I can get a better understanding of what's going on on pyarrow. "
] | 2025-11-24T06:22:53
| 2025-11-25T20:45:55
| null |
NONE
| null | null | null | null |
### Describe the bug
When downloading a dataset in streamed mode and exiting the program before the download completes, the python program core dumps when exiting:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with python 3.12.3, python 3.9.21
### Steps to reproduce the bug
Create python venv:
```bash
python -m venv venv
source venv/bin/activate
pip install datasets==4.4.1
```
Execute the following program:
```
from datasets import load_dataset
ds = load_dataset("HuggingFaceFW/fineweb-2", 'hrv_Latn', split="test", streaming=True)
for sample in ds:
break
```
### Expected behavior
Clean program exit
### Environment info
described above
**note**: the example works correctly when using ```datasets==3.1.0```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7879/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7877
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7877/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7877/events
|
https://github.com/huggingface/datasets/issues/7877
| 3,652,906,788
|
I_kwDODunzps7Zuu8k
| 7,877
|
work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi! Just created a Pull Request (#7890) to try to fix this using your suggestions. I hope it helps!"
] | 2025-11-21T19:51:48
| 2025-11-29T20:37:42
| null |
CONTRIBUTOR
| null | null | null | null |
This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would normally set via `export TMPDIR=/some/big/storage`.
However, the `tempfile` facility that `datasets` and `pyarrow` use is somewhat broken: if the path doesn't exist, it silently ignores it and falls back to using `/tmp`. Watch this:
```
$ export TMPDIR='/tmp/username'
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp
```
Now let's ensure the path exists:
```
$ export TMPDIR='/tmp/username'
$ mkdir -p $TMPDIR
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp/username
```
So I recommend `datasets` do either of the two:
1. assert if `$TMPDIR` dir doesn't exist, telling the user to create it
2. auto-create it
The reason for (1) is that I don't know why `tempfile` doesn't auto-create the dir - perhaps there is some security implication? I will let you make the decision, but the key is not to let things silently fall through, leaving the user puzzling over why, no matter what they do, they can't get past `No space left on device` while using `datasets`.
Thank you.
I found this via https://stackoverflow.com/questions/37229398/python-tempfile-gettempdir-does-not-respect-tmpdir while trying to help a colleague solve this exact issue.
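For illustration, a minimal sketch of option (2), auto-creating the directory; the helper is hypothetical and not part of `datasets`:
```python
# Hypothetical helper sketching option (2); option (1) would raise here instead.
import os
import tempfile

def ensure_tmpdir_exists() -> str:
    tmpdir = os.environ.get("TMPDIR")
    if tmpdir and not os.path.isdir(tmpdir):
        # Create the missing directory so tempfile actually honors $TMPDIR.
        os.makedirs(tmpdir, exist_ok=True)
        # tempfile caches its choice, so clear the cache before re-resolving.
        tempfile.tempdir = None
    return tempfile.gettempdir()
```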
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7877/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7872
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7872/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7872/events
|
https://github.com/huggingface/datasets/issues/7872
| 3,643,681,893
|
I_kwDODunzps7ZLixl
| 7,872
|
IterableDataset does not use features information in to_pandas
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/790640?v=4",
"events_url": "https://api.github.com/users/bonext/events{/privacy}",
"followers_url": "https://api.github.com/users/bonext/followers",
"following_url": "https://api.github.com/users/bonext/following{/other_user}",
"gists_url": "https://api.github.com/users/bonext/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bonext",
"id": 790640,
"login": "bonext",
"node_id": "MDQ6VXNlcjc5MDY0MA==",
"organizations_url": "https://api.github.com/users/bonext/orgs",
"received_events_url": "https://api.github.com/users/bonext/received_events",
"repos_url": "https://api.github.com/users/bonext/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bonext/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bonext/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bonext",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Created A PR!",
"Another test script that can be used to test the behavior - \n\n```\nimport datasets\nfrom datasets import features\n\ndef test_crash():\n common_features = features.Features({\n \"a\": features.Value(\"int64\"),\n \"b\": features.List({\"c\": features.Value(\"int64\")}),\n })\n\n def row_generator():\n yield {\"a\": 1, \"b\": []}\n yield {\"a\": 1, \"b\": [{\"c\": 1}]}\n\n d = datasets.IterableDataset.from_generator(row_generator, features=common_features)\n\n list(d.to_pandas()) # <-- this triggers the crash\n\n```"
] | 2025-11-19T17:12:59
| 2025-11-19T18:52:14
| null |
NONE
| null | null | null | null |
### Describe the bug
An `IterableDataset` created from a generator with an explicit `features=` parameter seems to ignore the provided features description for certain operations, e.g. `.to_pandas(...)`, when the data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets import features
def test_to_pandas_works_with_explicit_schema():
common_features = features.Features(
{
"a": features.Value("int64"),
"b": features.List({"c": features.Value("int64")}),
}
)
def row_generator():
data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
for row in data:
yield row
d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
for _ in d.to_pandas():
pass
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:3703: in to_pandas
# table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2563: in iter
# for key, pa_table in iterator:
# ^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2078: in _iter_arrow
# for key, pa_table in self.ex_iterable._iter_arrow():
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:599: in _iter_arrow
# yield new_key, pa.Table.from_batches(chunks_buffer)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# pyarrow/table.pxi:5039: in pyarrow.lib.Table.from_batches
# ???
# pyarrow/error.pxi:155: in pyarrow.lib.pyarrow_internal_check_status
# ???
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# > ???
# E pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
# E a: int64
# E b: list<item: null>
# E vs
# E a: int64
# E b: list<item: struct<c: int64>>
# pyarrow/error.pxi:92: ArrowInvalid
```
### Expected behavior
arrow operations use schema provided through `features=` and not the one inferred from the data
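As a point of comparison, a possible user-side workaround sketch (assuming the goal is just to get a DataFrame that honors the declared features): materializing with `Dataset.from_generator`, which writes using the declared schema, should avoid the mismatch:
```python
import datasets
from datasets import features

common_features = features.Features(
    {
        "a": features.Value("int64"),
        "b": features.List({"c": features.Value("int64")}),
    }
)

def row_generator():
    yield {"a": 1, "b": []}
    yield {"a": 1, "b": [{"c": 1}]}

# The writer encodes every example with the declared features, so the empty
# list is stored as list<struct<c: int64>> rather than an inferred list<null>.
df = datasets.Dataset.from_generator(row_generator, features=common_features).to_pandas()
print(df)
```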
### Environment info
- datasets version: 4.4.1
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.1
- huggingface_hub version: 1.1.4
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- fsspec version: 2025.10.0
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7872/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7871
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7871/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7871/events
|
https://github.com/huggingface/datasets/issues/7871
| 3,643,607,371
|
I_kwDODunzps7ZLQlL
| 7,871
|
Reqwest Error: HTTP status client error (429 Too Many Requests)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanan1116",
"id": 26405281,
"login": "yanan1116",
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"organizations_url": "https://api.github.com/users/yanan1116/orgs",
"received_events_url": "https://api.github.com/users/yanan1116/received_events",
"repos_url": "https://api.github.com/users/yanan1116/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanan1116",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"the dataset repo: `https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim`",
"Hi @yanan1116,\n\nThanks for the detailed report! However, this issue was filed in the wrong repository. This is a `huggingface_hub` issue, not a `datasets` issue.\n\nLooking at your traceback, you're using the `hf download` CLI command (from `huggingface_hub`), and the error occurs in `huggingface_hub/file_download.py` at line 571 in the `xet_get` function. The `datasets` library is not involved in this download at all.\n\nThe 429 error means the CAS (Content Addressable Storage) service at `https://cas-server.xethub.hf.co` is rate-limiting your requests. The `huggingface_hub` library currently doesn't have automatic retry logic for 429 errors from the CAS service.\n\nPlease reopen this issue at: https://github.com/huggingface/huggingface_hub/issues"
] | 2025-11-19T16:52:24
| 2025-11-30T03:32:00
| null |
NONE
| null | null | null | null |
### Describe the bug
full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 327, in __call__
raise e
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 310, in __call__
return get_command(self)(*args, **kwargs)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 803, in main
return _main(
self,
...<6 lines>...
**extra,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 192, in _main
rv = self.invoke(ctx)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 691, in wrapper
return callback(**use_params)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 188, in download
_print_result(run_download())
~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 149, in run_download
return snapshot_download(
repo_id=repo_id,
...<10 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 451, in snapshot_download
thread_map(
~~~~~~~~~~^
_inner_hf_hub_download,
^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 431, in _inner_hf_hub_download
hf_hub_download( # type: ignore
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
repo_id,
^^^^^^^^
...<14 lines>...
dry_run=dry_run,
^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 986, in hf_hub_download
return _hf_hub_download_to_local_dir(
# Destination
...<16 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1390, in _hf_hub_download_to_local_dir
_download_to_tmp_and_move(
~~~~~~~~~~~~~~~~~~~~~~~~~^
incomplete_path=paths.incomplete_path(etag),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1791, in _download_to_tmp_and_move
xet_get(
~~~~~~~^
incomplete_path=incomplete_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 571, in xet_get
download_files(
~~~~~~~~~~~~~~^
xet_download_info,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
progress_updater=[progress_updater],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
RuntimeError: Data processing error: CAS service error : Reqwest Error: HTTP status client error (429 Too Many Requests), domain: https://cas-server.xethub.hf.co/reconstructions/04b8a4667b84b3b874a6a2f070cec88920f6289e71185d69fa87e3cf29834710
```
### Steps to reproduce the bug
my command
```bash
hf download nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim --repo-type dataset --include "single_panda_gripper.CoffeePressButton/**" --local-dir /home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/
```
### Expected behavior
I expect the data to be downloaded without any issue.
### Environment info
huggingface_hub 1.1.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7871/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7870
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7870/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7870/events
|
https://github.com/huggingface/datasets/issues/7870
| 3,642,209,953
|
I_kwDODunzps7ZF7ah
| 7,870
|
Visualization for Medical Imaging Datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"It would be amazing to be able to show the Papaya UI in google colab / jupyter notebook. IIRC both allow serving javascript via nbextensions that we can surely use in HTML() objects.\n\nAlternatively we could also start with a simple approach and dump the medical image data as a video file that goes through the slices, so we don't need javascript."
] | 2025-11-19T11:05:39
| 2025-11-21T12:31:19
| 2025-11-21T12:31:19
|
CONTRIBUTOR
| null | null | null | null |
This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I checked the possibilities for visualizing NIfTI (and potentially DICOM) files, and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr: https://www.jsdelivr.com/package/npm/brainbrowser (but that is from 2015!)
- https://github.com/rii-mango/Papaya, custom but BSD-style license that would require datasets to list the conditions in their readme somewhere, last commit June 2024. I looked into this library and it looks mature and good enough for our use case; working on it for a short time I wasn't able to get it running, but I am sure we could, though it would probably require some JS on datasets' end. Available on jsdelivr as well: https://www.jsdelivr.com/package/npm/papaya-viewer. Seems like it's frequently loaded.
- https://github.com/hanayik/niivue, BSD3 license, last commit May 26, 2021. Archived. Doesn't look like an option.
I think the only real option for us is Papaya, but there is also the risk that we'll end up with an unmaintained package after a while, since development seems to be slow or even halted.
I think conceptually we would need to figure out how to build a good solution for visualizing medical image data. On shap, we have a separate javascript folder in which we render visualizations; this could be a blueprint but would require a bundler, etc. Alternatively, one could go with a naive approach and just write some HTML in a Python string, loading the package via jsdelivr.
@lhoestq thoughts?
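To make the naive approach concrete, here is a rough sketch of rendering Papaya from a Python string in a notebook; the jsdelivr asset paths and the `data-params` div convention are assumptions taken from Papaya's docs and would need to be verified:
```python
# Rough sketch only; the Papaya asset paths and parameter format are assumptions.
from IPython.display import HTML

def papaya_viewer(nifti_url: str) -> HTML:
    return HTML(f"""
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/papaya-viewer/release/current/standard/papaya.css">
    <script src="https://cdn.jsdelivr.net/npm/papaya-viewer/release/current/standard/papaya.js"></script>
    <script>var params = {{"images": ["{nifti_url}"]}};</script>
    <div class="papaya" data-params="params"></div>
    """)
```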
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7870/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 1:25:40
|
https://api.github.com/repos/huggingface/datasets/issues/7869
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7869/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7869/events
|
https://github.com/huggingface/datasets/issues/7869
| 3,636,808,734
|
I_kwDODunzps7YxUwe
| 7,869
|
Why does dataset merge fail when tools have different parameters?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/116297296?v=4",
"events_url": "https://api.github.com/users/hitszxs/events{/privacy}",
"followers_url": "https://api.github.com/users/hitszxs/followers",
"following_url": "https://api.github.com/users/hitszxs/following{/other_user}",
"gists_url": "https://api.github.com/users/hitszxs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hitszxs",
"id": 116297296,
"login": "hitszxs",
"node_id": "U_kgDOBu6OUA",
"organizations_url": "https://api.github.com/users/hitszxs/orgs",
"received_events_url": "https://api.github.com/users/hitszxs/received_events",
"repos_url": "https://api.github.com/users/hitszxs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hitszxs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hitszxs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hitszxs",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi @hitszxs,\n This is indeed by design,\n\nThe `datasets` library is built on top of [Apache Arrow](https://arrow.apache.org/), which uses a **columnar storage format** with strict schema requirements. When you try to concatenate/merge datasets, the library checks if features can be aligned using the [`_check_if_features_can_be_aligned`](https://github.com/huggingface/datasets/blob/main/src/datasets/features/features.py#L2297-L2316) function.\n\nTwo datasets can be merged if:\n1. Columns with the same name have the **same type**, OR\n2. One of them has `Value(\"null\")` (representing missing data)\n\nFor struct types (nested dictionaries like your tool schemas), **all fields must match exactly**. This ensures type safety and efficient columnar storage.\n\n## Workarounds for Your Use Case\n Store tools as JSON strings\n\nInstead of using nested struct types, store the tool definitions as JSON strings\n\n\n"
] | 2025-11-18T08:33:04
| 2025-11-30T03:52:07
| null |
NONE
| null | null | null | null |
Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions, I get the following error:
TypeError: Couldn't cast array of type
struct<refundFee: struct<description: string, type: string>, ... , servicerId: struct<description: string, type: string>>
to
{
'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')},
...
'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')}
}
From my understanding, the merge fails because the tools column's nested structure is different across datasets — e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge.
My question is: why is it designed this way?
Is this strict schema matching a hard requirement of the library?
Is there a recommended way to merge datasets with different tool schemas (different parameters and types)?
For an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility?
Any guidance or design rationale would be greatly appreciated. Thanks!
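For reference, a minimal sketch of the JSON-string workaround suggested in the comments above (the column and tool names are illustrative):
```python
import json
from datasets import Dataset, concatenate_datasets

# Store each tool schema as a JSON string so the column type is a plain string.
ds1 = Dataset.from_dict({"tools": [json.dumps({"refundFee": {"type": "string"}})]})
ds2 = Dataset.from_dict({"tools": [json.dumps({"servicerId": {"type": "string"}})]})

merged = concatenate_datasets([ds1, ds2])  # no schema conflict anymore
# Parse back into dicts at training time, e.g. in a map() or the collate function.
first_tools = json.loads(merged[0]["tools"])
```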
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7869/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7868
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7868/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7868/events
|
https://github.com/huggingface/datasets/issues/7868
| 3,632,429,308
|
I_kwDODunzps7Ygnj8
| 7,868
|
Data duplication with `split_dataset_by_node` and `interleaved_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4",
"events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}",
"followers_url": "https://api.github.com/users/ValMystletainn/followers",
"following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}",
"gists_url": "https://api.github.com/users/ValMystletainn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ValMystletainn",
"id": 42485228,
"login": "ValMystletainn",
"node_id": "MDQ6VXNlcjQyNDg1MjI4",
"organizations_url": "https://api.github.com/users/ValMystletainn/orgs",
"received_events_url": "https://api.github.com/users/ValMystletainn/received_events",
"repos_url": "https://api.github.com/users/ValMystletainn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ValMystletainn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ValMystletainn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ValMystletainn",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi @ValMystletainn ,\nCan I be assigned this issue?",
"> split_dataset_by_node\n\nHello, I have some questions about your intended use: (1) It seems unnecessary to use interleaving for a single dataset. (2) For multiple datasets, it seems possible to interleave first and then split by node?"
] | 2025-11-17T09:15:24
| 2025-11-29T03:21:34
| null |
NONE
| null | null | null | null |
### Describe the bug
Data is duplicated across different ranks when processing an IterableDataset with `split_dataset_by_node` first and then `interleave_datasets`.
### Steps to reproduce the bug
I have provided a minimal script:
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node
path = "/mnt/wwx/datasets/fineweb/data/CC-MAIN-2013-20/"
files = [os.path.join(path, fn) for fn in os.listdir(path)]
dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)
print(f"{dataset.n_shards=}")
dataset_rank0 = split_dataset_by_node(dataset, 0, 4)
dataset_rank1 = split_dataset_by_node(dataset, 1, 4)
dataset_rank0_interleaved = interleave_datasets([dataset_rank0], seed=42, probabilities=[1.0])
dataset_rank1_interleaved = interleave_datasets([dataset_rank1], seed=42, probabilities=[1.0])
print("print the first sample id from all datasets")
print("dataset", next(iter(dataset))['id'])
print("dataset_rank0", next(iter(dataset_rank0))['id'])
print("dataset_rank1", next(iter(dataset_rank1))['id'])
print("dataset_rank0_interleaved", next(iter(dataset_rank0_interleaved))['id'])
print("dataset_rank1_interleaved", next(iter(dataset_rank1_interleaved))['id'])
dataset_rank0_shard = dataset.shard(4, 0)
dataset_rank1_shard = dataset.shard(4, 1)
dataset_rank0_shard_interleaved = interleave_datasets([dataset_rank0_shard], seed=42, probabilities=[1.0])
dataset_rank1_shard_interleaved = interleave_datasets([dataset_rank1_shard], seed=42, probabilities=[1.0])
print("dataset_rank0_shard", next(iter(dataset_rank0_shard))['id'])
print("dataset_rank1_shard", next(iter(dataset_rank1_shard))['id'])
print("dataset_rank0_shard_interleaved", next(iter(dataset_rank0_shard_interleaved))['id'])
print("dataset_rank1_shard_interleaved", next(iter(dataset_rank1_shard_interleaved))['id'])
```
I just use a subfolder of C4 with 14 parquet files to do the quick run and get:
```
dataset.n_shards=14
print the first sample id from all datasets
dataset <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0 <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1 <urn:uuid:6b7da64f-c26e-4086-aef5-4b6f01106223>
dataset_rank0_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0_shard <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
dataset_rank0_shard_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard_interleaved <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
```
### Expected behavior
The first sample of `dataset_rank0_interleaved` and `dataset_rank1_interleaved` should be different, as with the other `rank0`/`rank1` pairs.
I have dug into the functions to see how the `split -> interleave` process works.
For an iterable dataset, `split_dataset_by_node` does not change the `._ex_iterable` attribute of the dataset. It just sets the distributed config on the dataset, and that config is only used in the actual `__iter__` call to handle shard splitting or sample skipping.
However, `interleave_datasets` for iterable datasets copies out the `._ex_iterable` of all the provided datasets and builds a new `_ex_iterable`; the distributed config is not copied along, which causes the data duplication across DP ranks.
So let me first ask: is this an unsupported order of use for those functions, meaning one should:
- always apply `split_dataset_by_node` last rather than somewhere in the middle, or
- use `dataset.shard(dp_size, dp_rank)` rather than `split_dataset_by_node` in cases like mine?
If this order of use is permitted, then I think it is a bug, and I can open a PR to fix it.
(I met this bug in real training; the related issue is https://github.com/ByteDance-Seed/VeOmni/issues/200 if it helps.)
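For completeness, a sketch of the reordering workaround (interleave first, then split by node), assuming that order is the supported one; `files` refers to the list built in the reproduction script above:
```python
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node

# `files` as defined in the reproduction script above.
dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)
interleaved = interleave_datasets([dataset], seed=42, probabilities=[1.0])
dataset_rank0 = split_dataset_by_node(interleaved, rank=0, world_size=4)
dataset_rank1 = split_dataset_by_node(interleaved, rank=1, world_size=4)
```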
### Environment info
datasets 4.4.1
ubuntu 20.04
python 3.11.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7868/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7867
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7867/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7867/events
|
https://github.com/huggingface/datasets/issues/7867
| 3,620,931,722
|
I_kwDODunzps7X0wiK
| 7,867
|
NonMatchingSplitsSizesError when loading partial dataset files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13678719?v=4",
"events_url": "https://api.github.com/users/QingGo/events{/privacy}",
"followers_url": "https://api.github.com/users/QingGo/followers",
"following_url": "https://api.github.com/users/QingGo/following{/other_user}",
"gists_url": "https://api.github.com/users/QingGo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QingGo",
"id": 13678719,
"login": "QingGo",
"node_id": "MDQ6VXNlcjEzNjc4NzE5",
"organizations_url": "https://api.github.com/users/QingGo/orgs",
"received_events_url": "https://api.github.com/users/QingGo/received_events",
"repos_url": "https://api.github.com/users/QingGo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QingGo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QingGo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QingGo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"While using verification_mode='no_checks' parameter in load_dataset() can bypass this validation, this solution is not intuitive or convenient for most users, especially those who are not familiar with all the parameters of the load_dataset() function.\n\n```python\nbook_corpus_ds = load_dataset(\n \"SaylorTwift/the_pile_books3_minus_gutenberg\",\n name=\"default\",\n data_files=\"data/train-00000-of-00213-312fd8d7a3c58a63.parquet\",\n split=\"train\",\n cache_dir=\"./data\",\n verification_mode='no_checks'\n)\n```",
"Thanks for the report and reproduction steps @QingGo \n@lhoestq which one of the following looks like a nicer way to handle this?\n\n1] Skip split-size validation entirely for partial loads\nIf the user passes data_files manually and it represents only a subset, then verify_splits() should simply not run, or skip validation only for that split.\n\n2] Replace the error with a warning\n\n3] Automatically detect partial-load cases(i mean we can try this out!)\n\nAssume this, \nIf data_files is provided AND\nthe number of provided files ≠ number of expected files in metadata,\nthen treat it as a partial load and disable strict verification.\n"
] | 2025-11-13T12:03:23
| 2025-11-16T15:39:23
| null |
NONE
| null | null | null | null |
### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a `NonMatchingSplitsSizesError`. This prevents users from loading partial datasets for quick validation in cases of poor network conditions or very large datasets.
### Steps to reproduce the bug
1. Use the Hugging Face `datasets` library to load a dataset with only specific files specified
2. Ensure the dataset repository has split metadata defined in README.md
3. Observe the error when attempting to load a subset of files
```python
# Example code that triggers the error
from datasets import load_dataset
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
name="default",
data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet",
split="train",
cache_dir="./data"
)
```
### Error Message
```
Traceback (most recent call last):
File "/Users/QingGo/code/llm_learn/src/data/clean_cc_bc.py", line 13, in <module>
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
...
File "/Users/QingGo/code/llm_learn/.venv/lib/python3.13/site-packages/datasets/utils/info_utils.py", line 77, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=106199627990.47722, num_examples=192661, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=454897326, num_examples=905, shard_lengths=None, dataset_name='the_pile_books3_minus_gutenberg')}]
```
### Expected behavior
When loading partial dataset files, the system should:
1. Skip the `NonMatchingSplitsSizesError` validation, OR
2. Only log a warning message instead of raising an error
### Environment info
- `datasets` version: 4.3.0
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.2
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7867/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7864
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7864/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7864/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7864/events
|
https://github.com/huggingface/datasets/issues/7864
| 3,619,137,823
|
I_kwDODunzps7Xt6kf
| 7,864
|
add_column and add_item erroneously(?) require new_fingerprint parameter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17151810?v=4",
"events_url": "https://api.github.com/users/echthesia/events{/privacy}",
"followers_url": "https://api.github.com/users/echthesia/followers",
"following_url": "https://api.github.com/users/echthesia/following{/other_user}",
"gists_url": "https://api.github.com/users/echthesia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/echthesia",
"id": 17151810,
"login": "echthesia",
"node_id": "MDQ6VXNlcjE3MTUxODEw",
"organizations_url": "https://api.github.com/users/echthesia/orgs",
"received_events_url": "https://api.github.com/users/echthesia/received_events",
"repos_url": "https://api.github.com/users/echthesia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/echthesia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echthesia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/echthesia",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Take this with a grain of salt, this is just my personal understanding:\nWhile you technically can overwrite the new_fingerprint with a string, e.g.\n```python\nt = d.add_column(\"new_column\", col_value, new_fingerprint=\"dummy_fp\")\nassert t._fingerprint == \"dummy_fp\" # this is true and will pass\n```\nthis is not desired since the fingerprint should be calculated based on the operations (and their arguments) to be unique. This is handled by the [fingerprint_transform](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6077) function which needs a \"new_fingerprint\" keyword argument and creates a unique hash if its value is not set, see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L432). So it is probably safer to not document this keyword, since one doesn't want the user to actually use it and it's only a feature in very limited cases for people really knowing what they are doing. The thing that might be bugging people who read the code is that `new_fingerprint` seems to be required for `add_item` and `add_column` but it is actually set by the decorator (in which's definition it is optional), so maybe changing the signature of `add_item` and `add_column` to `new_fingerprint: Optional[str] = None` would make sense, since this is also how it's handled in the other cases (created by claude):\n\n - [flatten](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2034)\n - [cast_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2165)\n - [remove_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2209)\n - [rename_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2263)\n - [rename_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2329)\n - [select_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2397)\n - [batch](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3760)\n - [filter](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3813)\n - [flatten_indices](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3959)\n - [select](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4038)\n - [_select_contiguous](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4128)\n - [sort](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4376)\n - [shuffle](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4506)\n - [train_test_split](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4641)\nSo as you mentioned, I believe the methods erronously require the `new_fingerprint` parameter and making them optional is a little consistency win."
] | 2025-11-13T02:56:49
| 2025-11-24T20:33:59
| null |
NONE
| null | null | null | null |
### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason it shouldn't be optional in these methods as well?
### Steps to reproduce the bug
Reproduction steps:
1. Look at the function signature for add_column: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6078
2. Repeat for add_item: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6336
### Expected behavior
`add_column` and `add_item` should either make the `new_fingerprint` parameter optional or document it in their docstrings.
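A minimal sketch of the first option; the simplified signature below is illustrative only and omits most of the real method:
```python
from typing import Optional

class Dataset:  # sketch, not the real class
    def add_column(self, name: str, column, new_fingerprint: Optional[str] = None, feature=None):
        # `new_fingerprint` is normally filled in by the `fingerprint_transform`
        # decorator when not provided, so callers never need to pass it.
        ...
```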
### Environment info
Not environment-dependent
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7864/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7864/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7863
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7863/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7863/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7863/events
|
https://github.com/huggingface/datasets/issues/7863
| 3,618,836,821
|
I_kwDODunzps7XsxFV
| 7,863
|
Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664715?v=4",
"events_url": "https://api.github.com/users/pavanramkumar/events{/privacy}",
"followers_url": "https://api.github.com/users/pavanramkumar/followers",
"following_url": "https://api.github.com/users/pavanramkumar/following{/other_user}",
"gists_url": "https://api.github.com/users/pavanramkumar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pavanramkumar",
"id": 3664715,
"login": "pavanramkumar",
"node_id": "MDQ6VXNlcjM2NjQ3MTU=",
"organizations_url": "https://api.github.com/users/pavanramkumar/orgs",
"received_events_url": "https://api.github.com/users/pavanramkumar/received_events",
"repos_url": "https://api.github.com/users/pavanramkumar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pavanramkumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavanramkumar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pavanramkumar",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"Kudos!",
"So cool ! Would love to see support for lance :)",
"@lhoestq thanks for your support! Any suggestions across `datasets` or `huggingface_hub` projects to make this happen?\n\nI just noticed this blog post: https://huggingface.co/blog/streaming-datasets\n\nDo you know if `hfFileSystem` from `huggingface_hub` is flexible enough to accommodate lance? I don't want to `open` and scan a file, I want to create generators with the `lance.dataset.to_batches()` from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nIdeally, something like this should just work:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nLooking at the huggingface blog post, I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions) cc @prrao87, @changhiskhan",
"> Do you know if HfFileSystem from huggingface_hub is flexible enough to accommodate lance?\n\nit provides file-like objects for files on HF, and works using range requests. PyArrow uses HfFileSystem for HF files already\n\nThough in the Parquet / PyArrow case the data is read generally row group per row group (using range requests with a minimum size `range_size_limit ` to optimize I/O in case of small row groups)\n\nPS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\n> I don't want to open and scan a file, I want to create generators with the lance.dataset.to_batches() from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nWe do something very similar for Parquet here: \n\nhttps://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/packaged_modules/parquet/parquet.py#L168-L169",
"Hi, I work on the Lance project. We'd be happy to see the format supported on huggingface hub.\n\nIt's not clear to me from this thread what is required for that. Could we clarify that? Are there examples we can point to?\n\n> I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions)\n\nCould you elaborate why a `FragmentScanOptions` subclass is required? Also, if it is, we could just define that as a subclass within the `pylance` module, unless I'm missing something.\n\nLance supports OpenDAL storage, so I think we could add support for huggingface's filesystem through that and make sure it's exposed in pylance. Could also help implement some write operations. Perhaps that's the main blocker? ",
"> PS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\nHi, I’m willing to add full-fledged support for the HF file system. This shouldn’t be considered a blocker. 🤟 ",
"Exposing the existing HF filesystem from OpenDAL in pylance would be great ! and a good first step\n\nExcited for write operations too",
"Thanks @lhoestq @wjones127 @Xuanwo ! I think we have all the necessary people on this thread now to make it happen :)\n\n> Could you elaborate why a FragmentScanOptions subclass is required? Also, if it is, we could just define that as a subclass within the pylance module, unless I'm missing something.\n\n@wjones127 I'm not actually sure this is needed but I'm guessing based on [this blog post](https://huggingface.co/blog/streaming-datasets) from a couple of weeks ago. Specifically, this section which allows creation of a dataset object with configurable prefetching:\n\n```\nimport pyarrow\nimport pyarrow.dataset\n\nfragment_scan_options = pyarrow.dataset.ParquetFragmentScanOptions(\n cache_options=pyarrow.CacheOptions(\n prefetch_limit=1,\n range_size_limit=128 << 20\n ),\n)\nds = load_dataset(parquet_dataset_id, streaming=True, fragment_scan_options=fragment_scan_options)\n```\n\nI might be completely wrong that we do need an equivalent `LanceFragmentScanOptions` PR into `pyarrow` and the `OpenDAL` path might be sufficient.\n\nI really just want something like this to work out of the box:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nIn the ideal case, I'd like to be able to control prefetch configuration via arguments to `to_batches()` like the ones that already exist for a lance dataset on any S3-compatible object store.\n\nWould a useful approach be to create a toy lance dataset on huggingface and see if this \"just works\"; then work backwards from there?\n\nAs for writing, I'm looking to migrate datasets from my own private S3-compatible object store bucket (Tigris Data) to huggingface datasets but ~~I'm 100% sure~~ I'm _not_ 100% sure whether we even need `hfFileSystem` compatible write capability\n\n\n",
"Here's a public dataset which could be a working example to work backwards from:\n\nhttps://huggingface.co/datasets/pavan-ramkumar/test-slaf\n\npylance currently looks for default object store backends and returns this `ValueError`\n\n```\n>>> import lance\n>>> hf_path = \"hf://datasets/pavan-ramkumar/test-slaf/tree/main/synthetic_50k_processed_v21.slaf/expression.lance\"\n>>> ds = lance.dataset(hf_path)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/__init__.py\", line 145, in dataset\n ds = LanceDataset(\n ^^^^^^^^^^^^^\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/dataset.py\", line 425, in __init__\n self._ds = _Dataset(\n ^^^^^^^^^\nValueError: Invalid user input: No object store provider found for scheme: 'hf'\nValid schemes: gs, memory, s3, az, file-object-store, file, oss, s3+ddb, /Users/runner/work/lance/lance/rust/lance-io/src/object_store/providers.rs:161:54\n```",
"@Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n\nDo let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub",
"> @Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n> \n> Do let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub\n\nI'm willing to work on this! Would you like to create an issue on lance side and ping me there?",
" > I'm willing to work on this! Would you like to create an issue on lance side and ping me there?\n\nDone! [Link](https://github.com/lance-format/lance/issues/5346)\n",
"@pavanramkumar pls check this out once it's merged! https://github.com/lance-format/lance/pull/5353"
] | 2025-11-13T00:51:07
| 2025-11-26T14:10:29
| null |
NONE
| null | null | null | null |
### Feature request
Huggingface datasets has great support for large tabular datasets in parquet with large partitions. I would love to see two things in the future:
- equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library
- more fine-grained control of streaming, so that I can stream at the partition / shard level
### Motivation
I work with very large `lance` datasets on S3 and often require random access for AI/ML applications like multi-node training. I was able to achieve high-throughput data loading on a lance dataset with ~150B rows by building distributed dataloaders that can be scaled both vertically (until I/O and CPU are saturated) and then horizontally (to work around network bottlenecks).
Using this strategy I was able to achieve 10-20x the throughput of the streaming data loader from the `huggingface/datasets` library.
I realized that these would be great features for huggingface to support natively.
### Your contribution
I'm not ready yet to make a PR but open to it with the right pointers!
| null |
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 5,
"hooray": 2,
"laugh": 2,
"rocket": 8,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7863/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7863/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7861
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7861/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7861/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7861/events
|
https://github.com/huggingface/datasets/issues/7861
| 3,611,821,713
|
I_kwDODunzps7XSAaR
| 7,861
|
Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/222552287?v=4",
"events_url": "https://api.github.com/users/KCKawalkar/events{/privacy}",
"followers_url": "https://api.github.com/users/KCKawalkar/followers",
"following_url": "https://api.github.com/users/KCKawalkar/following{/other_user}",
"gists_url": "https://api.github.com/users/KCKawalkar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KCKawalkar",
"id": 222552287,
"login": "KCKawalkar",
"node_id": "U_kgDODUPg3w",
"organizations_url": "https://api.github.com/users/KCKawalkar/orgs",
"received_events_url": "https://api.github.com/users/KCKawalkar/received_events",
"repos_url": "https://api.github.com/users/KCKawalkar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KCKawalkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KCKawalkar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KCKawalkar",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-11-11T11:05:38
| 2025-11-11T11:05:38
| null |
NONE
| null | null | null | null |
## 🐛 Bug Description
The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.
**Root cause**: This line rebuilds the entire dataset unnecessarily:
```python
dataset = self.flatten_indices() if self._indices is not None else self
```
## 📊 Performance Impact
| Dataset Size | Operation | Save Time | Slowdown |
|-------------|-----------|-----------|----------|
| 100K | Baseline (no indices) | 0.027s | - |
| 100K | Filtered (with indices) | 0.146s | **+431%** |
| 100K | Shuffled (with indices) | 0.332s | **+1107%** |
| 250K | Shuffled (with indices) | 0.849s | **+1202%** |
## 🔄 Reproduction
```python
from datasets import Dataset
import time
# Create dataset
dataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})
# Baseline save (no indices)
start = time.time()
dataset.save_to_disk('baseline')
baseline_time = time.time() - start
# Filtered save (creates indices)
filtered = dataset.filter(lambda x: True)
start = time.time()
filtered.save_to_disk('filtered')
filtered_time = time.time() - start
print(f"Baseline: {baseline_time:.3f}s")
print(f"Filtered: {filtered_time:.3f}s")
print(f"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%")
```
**Expected output**: Filtered dataset is 400-1000% slower than baseline
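To confirm where the time goes, here is a small follow-up sketch (real `datasets` API; timings are illustrative) that materializes the indices explicitly and then saves. The extra cost shows up in the `flatten_indices()` call rather than in the write itself:
```python
import time
from datasets import Dataset

ds = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})
filtered = ds.filter(lambda x: True)  # keeps every row but attaches an indices mapping

start = time.time()
flattened = filtered.flatten_indices()  # materializes the indices into a new Arrow table
print(f"flatten_indices: {time.time() - start:.3f}s")

start = time.time()
flattened.save_to_disk('filtered_flattened')  # saving the already-flattened dataset matches the baseline
print(f"save_to_disk: {time.time() - start:.3f}s")
```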
## 💡 Proposed Solution
Add optional parameter to control flattening:
```python
def save_to_disk(self, dataset_path, flatten_indices=True):
dataset = self.flatten_indices() if (self._indices is not None and flatten_indices) else self
# ... rest of save logic
```
**Benefits**:
- ✅ Immediate performance improvement for users who don't need flattening
- ✅ Backwards compatible (default behavior unchanged)
- ✅ Simple implementation
## 🌍 Environment
- **datasets version**: 2.x
- **Python**: 3.10+
- **OS**: Linux/macOS/Windows
## 📈 Impact
This affects **most ML preprocessing workflows** that filter/shuffle datasets before saving. The absolute overhead grows with dataset size, making it a significant bottleneck for production systems.
## 🔗 Additional Resources
We have comprehensive test scripts demonstrating this across multiple scenarios if needed for further investigation.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7861/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7861/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7856
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7856/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7856/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7856/events
|
https://github.com/huggingface/datasets/issues/7856
| 3,603,729,142
|
I_kwDODunzps7WzIr2
| 7,856
|
Missing transcript column when loading a local dataset with "audiofolder"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gweltou",
"id": 10166907,
"login": "gweltou",
"node_id": "MDQ6VXNlcjEwMTY2OTA3",
"organizations_url": "https://api.github.com/users/gweltou/orgs",
"received_events_url": "https://api.github.com/users/gweltou/received_events",
"repos_url": "https://api.github.com/users/gweltou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gweltou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gweltou",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"First bad commit 5c8869f8c36dbc8c8d423030b7b7c4fd64f8c729\n\nEDIT: This is not a bug or a regression. It was a breaking change introduced in the commit I mentioned and was also documented in there. The docs state how to handle this now, see https://huggingface.co/docs/datasets/main/en/audio_load#audiofolder-with-metadata\n\nor simply, move your metadata into the splits folder and update the paths, in your case this would look like this:\n```bash\nmy_dataset/\n - data/\n - test/\n - 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\n - 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\n - metadata.jsonl\n```\n\nand the pahts in the jsonl should be relative to the metadata.json:\n```bash\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\", \"transcript\": \"Ata tudoù penaos e tro ar bed ?\"}\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\", \"transcript\": \"Ur gwir blijadur eo adkavout ac'hanoc'h hiziv.\"}\n...\n```\n\nSo I think this can be closed.",
"Thank you for your quick answer !\nI'm sorry I missed that in the documentation.\nEverything works fine again after following your recommendations.\nI'm closing the issue."
] | 2025-11-08T16:27:58
| 2025-11-09T12:13:38
| 2025-11-09T12:13:38
|
NONE
| null | null | null | null |
### Describe the bug
My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file.
Only the `audio` column is read while the `transcript` column is not.
The last tested `datasets` version where the behavior was still correct is 2.18.0.
### Steps to reproduce the bug
Dataset directory structure:
```
my_dataset/
- data/
- test/
- 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3
- 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3
- ...
- metadata.jsonl
```
`metadata.jsonl` file content:
```
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3", "transcript": "Ata tudoù penaos e tro ar bed ?"}
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3", "transcript": "Ur gwir blijadur eo adkavout ac'hanoc'h hiziv."}
...
```
```python3
my_dataset = load_dataset("audiofolder", data_dir="my_dataset")
print(my_dataset)
'''
DatasetDict({
test: Dataset({
features: ['audio'],
num_rows: 347
})
})
'''
print(my_dataset['test'][0])
'''
{'audio': <datasets.features._torchcodec.AudioDecoder object at 0x75ffcd172510>}
'''
```
### Expected behavior
Being able to access the `transcript` column in the loaded dataset.
### Environment info
- `datasets` version: 4.4.1
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.39
- Python version: 3.13.9
- `huggingface_hub` version: 1.1.2
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
Note: same issue with `datasets` v3.6.0
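For reference, a minimal sketch of the layout that current versions (and the comment above) expect, with the metadata file moved into the split folder and `file_name` entries made relative to it:
```python
# my_dataset/
#   data/
#     test/
#       54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3
#       ...
#       metadata.jsonl   <- file_name values are relative to this file
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="my_dataset")
print(ds["test"].features)  # should now include both 'audio' and 'transcript'
```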
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gweltou",
"id": 10166907,
"login": "gweltou",
"node_id": "MDQ6VXNlcjEwMTY2OTA3",
"organizations_url": "https://api.github.com/users/gweltou/orgs",
"received_events_url": "https://api.github.com/users/gweltou/received_events",
"repos_url": "https://api.github.com/users/gweltou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gweltou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gweltou",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7856/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7856/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 19:45:40
|
https://api.github.com/repos/huggingface/datasets/issues/7852
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7852/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7852/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7852/events
|
https://github.com/huggingface/datasets/issues/7852
| 3,595,450,602
|
I_kwDODunzps7WTjjq
| 7,852
|
Problems with NifTI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"> 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n\nwhat did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't",
"> > 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n> \n> what did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't\n\nI used `push_to_hub` but the problem is that the nifti feature does not have an `embed_storage` function"
] | 2025-11-06T11:46:33
| 2025-11-06T16:20:38
| 2025-11-06T16:20:38
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative paths to the nifti files:
```bash
table['nifti']
<pyarrow.lib.ChunkedArray object at 0x798245d37d60>
[
-- is_valid: all not null
-- child 0 type: binary
[
null,
null,
null,
null,
null,
null
]
-- child 1 type: string
[
"/home/tobias/programming/github/datasets/nifti_extracted/T1.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii"
]
]
```
instead of containing bytes. The code was copy-pasted from the Pdf feature, so I wonder what is going wrong here.
### Steps to reproduce the bug
see the linked comment
### Expected behavior
Downloading should work as smoothly as it does for the Pdf feature.
### Environment info
- `datasets` version: 4.4.2.dev0
- Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7852/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7852/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4:34:05
|
https://api.github.com/repos/huggingface/datasets/issues/7842
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7842/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7842/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7842/events
|
https://github.com/huggingface/datasets/issues/7842
| 3,582,182,995
|
I_kwDODunzps7Vg8ZT
| 7,842
|
Transform with columns parameter triggers on non-specified column access
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18426892?v=4",
"events_url": "https://api.github.com/users/mr-brobot/events{/privacy}",
"followers_url": "https://api.github.com/users/mr-brobot/followers",
"following_url": "https://api.github.com/users/mr-brobot/following{/other_user}",
"gists_url": "https://api.github.com/users/mr-brobot/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mr-brobot",
"id": 18426892,
"login": "mr-brobot",
"node_id": "MDQ6VXNlcjE4NDI2ODky",
"organizations_url": "https://api.github.com/users/mr-brobot/orgs",
"received_events_url": "https://api.github.com/users/mr-brobot/received_events",
"repos_url": "https://api.github.com/users/mr-brobot/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mr-brobot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mr-brobot/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mr-brobot",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2025-11-03T13:55:27
| 2025-11-03T14:34:13
| 2025-11-03T14:34:13
|
NONE
| null | null | null | null |
### Describe the bug
Iterating over a [`Column`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L633-L692) iterates through the parent [`Dataset`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L695) and applies all formatting/transforms on each row, regardless of which column is being accessed. This causes an error when transforms depend on columns not present in the projection.
### Steps to reproduce the bug
### Load a dataset with multiple columns
```python
ds = load_dataset("mrbrobot/isic-2024", split="train")
```
### Define a transform that specifies an input column
```python
def image_transform(batch):
batch["image"] = batch["image"] # KeyError when batch doesn't contain "image"
return batch
# apply transform only to image column
ds = ds.with_format("torch")
ds = ds.with_transform(image_transform, columns=["image"], output_all_columns=True)
```
### Iterate over non-specified column
```python
# iterate over a different column, triggers the transform on each row, but batch doesn't contain "image"
for t in ds["target"]: # KeyError: 'image'
print(t)
```
### Expected behavior
If a user iterates over `ds["target"]` and the transform specifies `columns=["image"]`, the transform should be skipped.
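In the meantime, a hedged workaround sketch (not the library's intended fix): guard the transform so it only touches the column when it is actually present in the batch it receives, which avoids the `KeyError` while iterating other columns:
```python
from datasets import load_dataset

ds = load_dataset("mrbrobot/isic-2024", split="train")

def image_transform(batch):
    # Only touch "image" when the projection handed to the transform contains it.
    if "image" in batch:
        batch["image"] = batch["image"]
    return batch

ds = ds.with_format("torch")
ds = ds.with_transform(image_transform, columns=["image"], output_all_columns=True)

for t in ds["target"]:  # no KeyError with the guard in place
    print(t)
```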
### Environment info
`datasets`: 4.2.0
Python: 3.12.12
Linux: Debian 11.11
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7842/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7842/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:38:46
|
https://api.github.com/repos/huggingface/datasets/issues/7841
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7841/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7841/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7841/events
|
https://github.com/huggingface/datasets/issues/7841
| 3,579,506,747
|
I_kwDODunzps7VWvA7
| 7,841
|
DOC: `mode` parameter on pdf and video features unused
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"They seem to be artefacts from a copy-paste of the Image feature ^^' we should remove them"
] | 2025-11-02T12:37:47
| 2025-11-05T14:04:04
| 2025-11-05T14:04:04
|
CONTRIBUTOR
| null | null | null | null |
Following up on https://github.com/huggingface/datasets/pull/7840 I asked claude code to check for undocumented parameters for other features and it found:
- mode parameter on video is documented but unused: https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py#L48-L49
- the same goes for the mode parameter on the pdf feature: https://github.com/huggingface/datasets/blob/main/src/datasets/features/pdf.py#L47-L48
I assume the way to go here is to check whether these modes can be supported and, if not, remove them.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7841/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7841/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 1:26:17
|
https://api.github.com/repos/huggingface/datasets/issues/7839
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7839/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7839/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7839/events
|
https://github.com/huggingface/datasets/issues/7839
| 3,579,121,843
|
I_kwDODunzps7VVRCz
| 7,839
|
datasets doesn't work with python 3.14
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zachmoshe",
"id": 4789087,
"login": "zachmoshe",
"node_id": "MDQ6VXNlcjQ3ODkwODc=",
"organizations_url": "https://api.github.com/users/zachmoshe/orgs",
"received_events_url": "https://api.github.com/users/zachmoshe/received_events",
"repos_url": "https://api.github.com/users/zachmoshe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zachmoshe",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for the report.\nHave you tried on main? This should work, there was recently a PR merged to address this problem, see #7817",
"Works on main 👍 \nWhat's the release schedule for `datasets`? Seems like a cadence of ~2weeks so I assume a real version is due pretty soon?",
"let's say we do a new release later today ? :)",
"Premium service! \n😂 👑 \nJust checked 4.4.0 - works as expected!"
] | 2025-11-02T09:09:06
| 2025-11-04T14:02:25
| 2025-11-04T14:02:25
|
NONE
| null | null | null | null |
### Describe the bug
It seems that `datasets` doesn't work with Python 3.14. The root cause seems to be a `dill` pickling API that changed.
```
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Steps to reproduce the bug
(on a new folder)
uv init
uv python pin 3.14
uv add datasets
uv run python
(in REPL)
import datasets
datasets.load_dataset("cais/mmlu", "all") # will fail on any dataset
```
>>> datasets.load_dataset("cais/mmlu", "all")
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
datasets.load_dataset("cais/mmlu", "all")
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
path=path,
...<10 lines>...
**config_kwargs,
)
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 615, in _use_legacy_cache_dir_if_possible
self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 487, in _check_legacy_cache2
config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
return cls.hash_bytes(dumps(value))
~~~~~^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
dump(obj, file)
~~~~^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
Pickler(file, recurse=True).dump(obj)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
StockPickler.dump(self, obj)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 498, in dump
self.save(obj)
~~~~~~~~~^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
StockPickler.save(self, obj, save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 572, in save
f(self, obj) # Call unbound method with explicit self
~^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
StockPickler.save_dict(pickler, obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 1064, in save_dict
self._batch_setitems(obj.items(), obj)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Expected behavior
should work.
### Environment info
datasets==v4.3.0
python==3.14
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zachmoshe",
"id": 4789087,
"login": "zachmoshe",
"node_id": "MDQ6VXNlcjQ3ODkwODc=",
"organizations_url": "https://api.github.com/users/zachmoshe/orgs",
"received_events_url": "https://api.github.com/users/zachmoshe/received_events",
"repos_url": "https://api.github.com/users/zachmoshe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zachmoshe",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7839/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7839/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 4:53:19
|
https://api.github.com/repos/huggingface/datasets/issues/7837
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7837/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7837/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7837/events
|
https://github.com/huggingface/datasets/issues/7837
| 3,575,454,726
|
I_kwDODunzps7VHRwG
| 7,837
|
mono parameter to the Audio feature is missing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4",
"events_url": "https://api.github.com/users/ernestum/events{/privacy}",
"followers_url": "https://api.github.com/users/ernestum/followers",
"following_url": "https://api.github.com/users/ernestum/following{/other_user}",
"gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ernestum",
"id": 1250234,
"login": "ernestum",
"node_id": "MDQ6VXNlcjEyNTAyMzQ=",
"organizations_url": "https://api.github.com/users/ernestum/orgs",
"received_events_url": "https://api.github.com/users/ernestum/received_events",
"repos_url": "https://api.github.com/users/ernestum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ernestum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ernestum",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hey, we removed the misleading passage in the docstring and enabled support for `num_channels` as torchcodec does",
"thanks!"
] | 2025-10-31T15:41:39
| 2025-11-03T15:59:18
| 2025-11-03T14:24:12
|
NONE
| null | null | null | null |
According to the docs, there is a `mono` parameter to the Audio feature which turns any stereo signal into mono. In practice the signal is not touched, and the `mono` parameter, even though documented, does not exist.
https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/audio.py#L52C1-L54C22
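In the meantime, a hedged workaround sketch: downmix to mono explicitly in a `map()` call. The folder path is hypothetical, and the decoded example is assumed to expose dict-style `"array"` / `"sampling_rate"` access with a channel-first layout; adjust if your decoder version behaves differently:
```python
import numpy as np
from datasets import load_dataset, Audio

ds = load_dataset("audiofolder", data_dir="my_audio", split="train")  # hypothetical local folder
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def to_mono(example):
    audio = example["audio"]
    arr = np.asarray(audio["array"])  # dict-style access assumed here
    if arr.ndim == 2:  # assumed (channels, samples); swap the axis if your decoder is channel-last
        arr = arr.mean(axis=0)
    example["audio"] = {"array": arr, "sampling_rate": audio["sampling_rate"]}
    return example

ds = ds.map(to_mono)
```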
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7837/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7837/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 22:42:33
|
https://api.github.com/repos/huggingface/datasets/issues/7834
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7834/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7834/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7834/events
|
https://github.com/huggingface/datasets/issues/7834
| 3,558,802,959
|
I_kwDODunzps7UHwYP
| 7,834
|
Audio.cast_column() or Audio.decode_example() causes Colab kernel crash (std::bad_alloc)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2559570?v=4",
"events_url": "https://api.github.com/users/rachidio/events{/privacy}",
"followers_url": "https://api.github.com/users/rachidio/followers",
"following_url": "https://api.github.com/users/rachidio/following{/other_user}",
"gists_url": "https://api.github.com/users/rachidio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rachidio",
"id": 2559570,
"login": "rachidio",
"node_id": "MDQ6VXNlcjI1NTk1NzA=",
"organizations_url": "https://api.github.com/users/rachidio/orgs",
"received_events_url": "https://api.github.com/users/rachidio/received_events",
"repos_url": "https://api.github.com/users/rachidio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rachidio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rachidio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rachidio",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi ! `datasets` v4 uses `torchcodec` for audio decoding (previous versions were using `soundfile`). What is your `torchcodec` version ? Can you try other versions of `torchcodec` and see if it works ?",
"When I install `datasets` with `pip install datasets[audio]` it install this version of `torchcodec`:\n```\nName: torchcodec\nVersion: 0.8.1\n```\nCan you please point to a working version of `torchcodec`?\n\nThanks for your help",
"I believe you simply need to make sure the torchcodec and torch versions work together. Here is how to fix it:\n\n```python\n!pip install -U torchcodec torch\n```",
"I am also encountering this same issue when i run `print(ug_court[\"train\"][0])` to view the features of the first row of my audio data",
"the problem still goes on to when i force training with seeing these features",
"Thank you @lhoestq I've reinstalled the packages an the error is gone.\nMy new versions are:\n```\nName: torch\nVersion: 2.8.0\n---\nName: torchaudio\nVersion: 2.8.0\n---\nName: torchcodec\nVersion: 0.8.1\n```\n\nRegards",
"mine too has worked ",
"Hi,\n\nI encounter the same problem when trying to inspect the first element in the dataset. My environment is:\n```\nroot@3ac6f9f8c6c4:/workspace# pip3 list | grep torch\npytorch-lightning 2.5.6\npytorch-metric-learning 2.9.0\ntorch 2.8.0+cu126\ntorch-audiomentations 0.12.0\ntorch_pitch_shift 1.2.5\ntorchaudio 2.8.0+cu126\ntorchcodec 0.8.1\ntorchelastic 0.2.2\ntorchmetrics 1.8.2\ntorchvision 0.23.0+cu126\n```\nthe same as @rachidio 's new version that works.\n\nI am in a Docker container environment, and here is the code I am working with:\n\n<img width=\"1350\" height=\"388\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/4cf0400f-9ee7-47c7-ba57-c4ef3c1e7fd6\" />"
] | 2025-10-27T22:02:00
| 2025-11-15T16:28:04
| null |
NONE
| null | null | null | null |
### Describe the bug
When using the huggingface datasets.Audio feature to decode a local or remote (public HF dataset) audio file inside Google Colab, the notebook kernel crashes with std::bad_alloc (C++ memory allocation failure).
The crash happens even with a minimal code example and valid .wav file that can be read successfully using soundfile.
Here is a sample Colab notebook to reproduce the problem.
https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing
code sample:
```
...
audio_dataset = audio_dataset.cast_column("audio", Audio(sampling_rate=16000))
# Accessing the first element crashes the Colab kernel
print(audio_dataset[0]["audio"])
```
Error log
```
WARNING what(): std::bad_alloc
terminate called after throwing an instance of 'std::bad_alloc'
```
Environment
Platform: Google Colab (Python 3.12.12)
datasets Version: 4.3.0
soundfile Version: 0.13.1
torchaudio Version: 2.8.0+cu126
Thanks in advance for any help with this error, which I have been getting for about two weeks now after it previously worked.
Regards
### Steps to reproduce the bug
https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing
### Expected behavior
Loading the audio and decode it.
It should safely return:
{
"path": "path/filaname.wav",
"array": np.ndarray([...]),
"sampling_rate": 16000
}
### Environment info
Environment
Platform: Google Colab (Python 3.12.12)
datasets Version: 4.3.0
soundfile Version: 0.13.1
torchaudio Version: 2.8.0+cu126
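Based on the resolution reported in the comments (upgrading torch/torchcodec to matching versions made the crash go away for others), a quick sanity check of the installed versions before decoding may help:
```python
from importlib.metadata import version

# Print the installed versions of the relevant packages; mismatched torch/torchcodec
# pairs are what the comments point to as the cause of the std::bad_alloc crash.
for pkg in ("torch", "torchaudio", "torchcodec", "datasets"):
    print(pkg, version(pkg))
# If they are mismatched, `pip install -U torch torchaudio torchcodec` reportedly resolves it.
```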
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7834/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7834/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7832
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7832/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7832/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7832/events
|
https://github.com/huggingface/datasets/issues/7832
| 3,555,991,552
|
I_kwDODunzps7T9CAA
| 7,832
|
[DOCS][minor] TIPS paragraph not compiled in docs/stream
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
"gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/art-test-stack",
"id": 110672812,
"login": "art-test-stack",
"node_id": "U_kgDOBpi7rA",
"organizations_url": "https://api.github.com/users/art-test-stack/orgs",
"received_events_url": "https://api.github.com/users/art-test-stack/received_events",
"repos_url": "https://api.github.com/users/art-test-stack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/art-test-stack",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2025-10-27T10:03:22
| 2025-10-27T10:10:54
| 2025-10-27T10:10:54
|
CONTRIBUTOR
| null | null | null | null |
In the published documentation, the markdown 'TIP' admonition in docs/stream#shuffle is not rendered correctly, unlike the other tips on the same page, even though the markdown source itself is valid.
Documentation:
https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle()%5D(/docs/datasets/v4.3.0/en/package_reference/main_classes%23datasets.IterableDataset.shuffle)%20will%20also%20shuffle%20the%20order%20of%20the%20shards%20if%20the%20dataset%20is%20sharded%20into%20multiple%20files.
Github source:
https://github.com/huggingface/datasets/blob/main/docs/source/stream.mdx#:~:text=Casting%20only%20works%20if%20the%20original%20feature%20type%20and%20new%20feature%20type%20are%20compatible.%20For%20example%2C%20you%20can%20cast%20a%20column%20with%20the%20feature%20type%20Value(%27int32%27)%20to%20Value(%27bool%27)%20if%20the%20original%20column%20only%20contains%20ones%20and%20zeros.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
"gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/art-test-stack",
"id": 110672812,
"login": "art-test-stack",
"node_id": "U_kgDOBpi7rA",
"organizations_url": "https://api.github.com/users/art-test-stack/orgs",
"received_events_url": "https://api.github.com/users/art-test-stack/received_events",
"repos_url": "https://api.github.com/users/art-test-stack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/art-test-stack",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7832/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7832/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:07:32
|
https://api.github.com/repos/huggingface/datasets/issues/7829
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7829/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7829/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7829/events
|
https://github.com/huggingface/datasets/issues/7829
| 3,548,584,085
|
I_kwDODunzps7TgxiV
| 7,829
|
Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24591024?v=4",
"events_url": "https://api.github.com/users/raphaelsty/events{/privacy}",
"followers_url": "https://api.github.com/users/raphaelsty/followers",
"following_url": "https://api.github.com/users/raphaelsty/following{/other_user}",
"gists_url": "https://api.github.com/users/raphaelsty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/raphaelsty",
"id": 24591024,
"login": "raphaelsty",
"node_id": "MDQ6VXNlcjI0NTkxMDI0",
"organizations_url": "https://api.github.com/users/raphaelsty/orgs",
"received_events_url": "https://api.github.com/users/raphaelsty/received_events",
"repos_url": "https://api.github.com/users/raphaelsty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/raphaelsty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raphaelsty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/raphaelsty",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Thanks for the report, this is possibly related #7722 and #7694.\n\nCould you pls provide steps to reproduce this?",
"To overcome this issue right now I did simply reduce the size of the dataset and ended up running a for loop (my training has now a constant learning rate schedule). From what I understood, and I don't know if it's possible, the solution would be to tell the backend of `datasets` to leave x% of the memory free (including memory mapping). Can't release the data right now but I will and then allow to reproduce this issue. But it will involve to have some free TB of disk",
"@raphaelsty thanks for coming back to this. I assume you are running in streaming mode? That should prevent these errors but it looks like more people than just you have this problem, so a clearly reproducing example (including data + code) is highly appreciated.",
"This could be related to this issue: https://github.com/huggingface/datasets/issues/4883 in which we discussed how RSS and memory mapping works and depends on the OS and disk type."
] | 2025-10-24T09:51:38
| 2025-11-06T13:31:26
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi team, first off, I love the datasets library! 🥰
I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.
Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.
Training Task: I'm performing contrastive learning with SentenceTransformer and Accelerate on a single node with 4 H100, which requires me to sample from only one dataset at a time.
Training Loop: At each training step, I sample ~16,000 examples from a single dataset, and then switch to a different dataset for the next step. I iterate through all 362 datasets this way.
Problem: The process's memory usage continuously increases over time, eventually stalling the training and leaving the GPUs idle. It seems memory from previously sampled datasets isn't being released. I've set num_workers=0 for all experiments.
Chart 1: Standard DatasetDict The memory usage grows steadily until it makes the training stall (RSS memory) <img width="773" height="719" alt="Image" src="https://github.com/user-attachments/assets/6606bef5-1153-4f2d-bf08-82da249d6e8d" />
Chart 2: IterableDatasetDict I also tried to use IterableDatasetDict and IterableDataset. The memory curve is "smoother," but the result is the same: it grows indefinitely and the training stalls. <img width="339" height="705" alt="Image" src="https://github.com/user-attachments/assets/ee90c1a1-6c3b-4135-9edc-90955cb1695a" />
Any feedback or guidance on how to manage this memory would be greatly appreciated!
### Steps to reproduce the bug
WIP: I'll add some code that manages to reproduce this error, but it is not straightforward.
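Until then, a minimal sketch of the access pattern described above (toy sizes, synthetic data, and `psutil` for RSS tracking are all stand-ins for the real setup):
```python
import os
import psutil  # assumed available, only used to print RSS
from datasets import Dataset, DatasetDict

dsets = DatasetDict({
    f"ds_{i}": Dataset.from_dict({"text": [f"{i}-{j}" for j in range(100_000)]})
    for i in range(20)  # stand-in for the 362 real datasets
})

proc = psutil.Process(os.getpid())
for step in range(200):
    name = f"ds_{step % len(dsets)}"
    batch = dsets[name].select(range(16_000))[:]  # grab ~16k rows from a single dataset per step
    if step % 20 == 0:
        print(f"step {step:4d}  rss={proc.memory_info().rss / 1e9:.2f} GB")
```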
### Expected behavior
The memory usage should remain relatively constant or plateau after a few steps. Memory used for sampling one dataset should be released before or during the sampling of the next dataset.
### Environment info
Python: 3.12
Datasets: 4.3.0
SentenceTransformers: 5.1.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7829/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7829/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7821
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7821/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7821/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7821/events
|
https://github.com/huggingface/datasets/issues/7821
| 3,520,913,195
|
I_kwDODunzps7R3N8r
| 7,821
|
Building a dataset with large variable size arrays results in error ArrowInvalid: Value X too large to fit in C integer type
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "https://api.github.com/users/kkoutini/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kkoutini",
"id": 51880718,
"login": "kkoutini",
"node_id": "MDQ6VXNlcjUxODgwNzE4",
"organizations_url": "https://api.github.com/users/kkoutini/orgs",
"received_events_url": "https://api.github.com/users/kkoutini/received_events",
"repos_url": "https://api.github.com/users/kkoutini/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kkoutini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkoutini/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kkoutini",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Thanks for reporting ! You can fix this by specifying the output type explicitly and use `LargeList` which uses int64 for offsets:\n\n```python\nfeatures = Features({\"audio\": LargeList(Value(\"uint16\"))})\nds = ds.map(..., features=features)\n```\n\nIt would be cool to improve `list_of_pa_arrays_to_pyarrow_listarray()` to automatically use `LargeList` if the lists are longer than the int32 limit though. Contributions are welcome if you'd like to improve it"
] | 2025-10-16T08:45:17
| 2025-10-20T13:42:05
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
I used `map` to store raw audio waveforms of variable length in a dataset column; the `map` call fails with `ArrowInvalid: Value X too large to fit in C integer type`.
```
Traceback (most recent call last):
File "...lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3526, in _map_single
writer.write_batch(batch)
File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 605, in write_batch
arrays.append(pa.array(typed_sequence))
^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 252, in pyarrow.lib.array
File "pyarrow/array.pxi", line 114, in pyarrow.lib._handle_arrow_array_protocol
File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 225, in __arrow_array__
out = list_of_np_array_to_pyarrow_listarray(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/features/features.py", line 1538, in list_of_np_array_to_pyarrow_listarray
return list_of_pa_arrays_to_pyarrow_listarray(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/features/features.py", line 1530, in list_of_pa_arrays_to_pyarrow_listarray
offsets = pa.array(offsets, type=pa.int32())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 362, in pyarrow.lib.array
File "pyarrow/array.pxi", line 87, in pyarrow.lib._ndarray_to_array
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Value 2148479376 too large to fit in C integer type
```
### Steps to reproduce the bug
Calling map on a dataset that returns a column with long 1d numpy arrays of variable length.
Example:
```python
# %%
import logging
import datasets
import pandas as pd
import numpy as np
# %%
def process_batch(batch, rank):
res = []
for _ in batch["id"]:
res.append(np.zeros((2**30)).astype(np.uint16))
return {"audio": res}
if __name__ == "__main__":
df = pd.DataFrame(
{
"id": list(range(400)),
}
)
ds = datasets.Dataset.from_pandas(df)
try:
from multiprocess import set_start_method
set_start_method("spawn")
except RuntimeError:
print("Spawn method already set, continuing...")
mapped_ds = ds.map(
process_batch,
batched=True,
batch_size=2,
with_rank=True,
num_proc=2,
cache_file_name="path_to_cache/tmp.arrow",
writer_batch_size=200,
remove_columns=ds.column_names,
# disable_nullable=True,
)
```
### Expected behavior
I think the offsets should be `pa.int64()` when needed, instead of being forced to `pa.int32()`,
in https://github.com/huggingface/datasets/blob/3e13d30823f8ec498d56adbc18c6880a5463b313/src/datasets/features/features.py#L1535
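For illustration, a hedged sketch of what 64-bit offsets look like on the Arrow side (tiny stand-in arrays; this is not the library's actual code path):
```python
import numpy as np
import pyarrow as pa

chunks = [np.zeros(1_000, dtype=np.uint16) for _ in range(3)]  # tiny stand-ins for the real waveforms
offsets = np.cumsum([0] + [len(c) for c in chunks])
values = pa.array(np.concatenate(chunks))

# LargeListArray uses int64 offsets, so total lengths beyond 2**31 - 1 values are representable.
large_list = pa.LargeListArray.from_arrays(pa.array(offsets, type=pa.int64()), values)
print(large_list.type)  # large_list<item: uint16>
```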
### Environment info
- `datasets` version: 3.3.1
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.12.9
- `huggingface_hub` version: 0.29.0
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7821/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7821/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7819
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7819/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7819/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7819/events
|
https://github.com/huggingface/datasets/issues/7819
| 3,517,086,110
|
I_kwDODunzps7Ronme
| 7,819
|
Cannot download opus dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51946663?v=4",
"events_url": "https://api.github.com/users/liamsun2019/events{/privacy}",
"followers_url": "https://api.github.com/users/liamsun2019/followers",
"following_url": "https://api.github.com/users/liamsun2019/following{/other_user}",
"gists_url": "https://api.github.com/users/liamsun2019/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liamsun2019",
"id": 51946663,
"login": "liamsun2019",
"node_id": "MDQ6VXNlcjUxOTQ2NjYz",
"organizations_url": "https://api.github.com/users/liamsun2019/orgs",
"received_events_url": "https://api.github.com/users/liamsun2019/received_events",
"repos_url": "https://api.github.com/users/liamsun2019/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liamsun2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liamsun2019/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liamsun2019",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi ! it seems \"en-zh\" doesn't exist for this dataset\n\nYou can see the list of subsets here: https://huggingface.co/datasets/Helsinki-NLP/opus_books"
] | 2025-10-15T09:06:19
| 2025-10-20T13:45:16
| null |
NONE
| null | null | null | null |
When I tried to download opus_books using:

```python
from datasets import load_dataset

dataset = load_dataset("Helsinki-NLP/opus_books")
```

I got the following error:

```
FileNotFoundError: Couldn't find any data file at /workspace/Helsinki-NLP/opus_books. Couldn't find 'Helsinki-NLP/opus_books' on the Hugging Face Hub either: LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
```

I also tried:

```python
dataset = load_dataset("opus_books", "en-zh")
```

and the errors remain the same. However, I can download "mlabonne/FineTome-100k" successfully.

My `datasets` version is 4.2.0.

Any clues? Big thanks.
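As a quick check (a sketch using the public `datasets` helper; per the comment earlier in this entry, `en-zh` is likely not among the available subsets):
```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names("Helsinki-NLP/opus_books")
print("en-zh" in configs)
print(configs[:10])
```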
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7819/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7819/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7818
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7818/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7818/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7818/events
|
https://github.com/huggingface/datasets/issues/7818
| 3,515,887,618
|
I_kwDODunzps7RkDAC
| 7,818
|
train_test_split and stratify breaks with Numpy 2.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24845694?v=4",
"events_url": "https://api.github.com/users/davebulaval/events{/privacy}",
"followers_url": "https://api.github.com/users/davebulaval/followers",
"following_url": "https://api.github.com/users/davebulaval/following{/other_user}",
"gists_url": "https://api.github.com/users/davebulaval/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davebulaval",
"id": 24845694,
"login": "davebulaval",
"node_id": "MDQ6VXNlcjI0ODQ1Njk0",
"organizations_url": "https://api.github.com/users/davebulaval/orgs",
"received_events_url": "https://api.github.com/users/davebulaval/received_events",
"repos_url": "https://api.github.com/users/davebulaval/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davebulaval/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davebulaval/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davebulaval",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I can't reproduce this. Could you pls provide an example with a public dataset/artificial dataset and show how you loaded that?\n\nThis works for me:\n\n```python\nimport numpy as np\nfrom datasets import Dataset, Features, ClassLabel, Value\n\ndata = {\"text\": [f\"sample_{i}\" for i in range(100)], \"label\": [i % 3 for i in range(100)]}\nfeatures = Features({\"text\": Value(\"string\"),\n \"label\": ClassLabel(names=[\"class_0\", \"class_1\", \"class_2\"])})\ndataset = Dataset.from_dict(data, features=features)\nsplits = dataset.train_test_split(test_size=0.2, stratify_by_column=\"label\")\nprint(f\"Success with numpy {np.__version__}\")\n```\nbut it also works for `numpy<2`",
"@davebulaval tried with numpy 2.3.4, and maybe i have successfully reproduced the bug!\n```\nValueError: Unable to avoid copy while creating an array as requested.\nIf using `np.array(obj, copy=False)` replace it with `np.asarray(obj)` to allow a copy when needed (no behavior change in NumPy 1.x).\nFor more details, see https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword.\n```\n\nAlso i downgraded to numpy 1.26.4\n```\n(hf-reproduce) F:\\Python\\Machine learning\\reproducing>python repro.py\nDatasetDict({\n train: Dataset({\n features: ['text', 'label'],\n num_rows: 16\n })\n test: Dataset({\n features: ['text', 'label'],\n num_rows: 4\n })\n})\n```",
"Also @CloseChoice The bug only appears in cases where the Arrow array cannot be represented as a contiguous NumPy array without copying.\n\nSo closing the discussion loop here - \n\nThe error occurs because `train_test_split(..., stratify_by_column=...)` attempts to convert\nan Arrow column to a NumPy array using `np.array(..., copy=False)`.\n\nIn NumPy <2.0 this silently allowed a copy if needed.\nIn NumPy ≥2.0 this raises:\nValueError: Unable to avoid copy while creating an array as requested.\n\nThis only happens when the Arrow column is not contiguous in memory, which explains\nwhy some datasets reproduce it and others do not."
] | 2025-10-15T00:01:19
| 2025-10-28T16:10:44
| 2025-10-28T16:10:44
|
NONE
| null | null | null | null |
### Describe the bug
As stated in the title, since NumPy changed the semantics of the `copy` keyword in version 2.0, the `stratify_by_column` parameter breaks.
e.g. `all_dataset.train_test_split(test_size=0.2, stratify_by_column="label")` raises a NumPy error.
It works if you downgrade NumPy to a version lower than 2.0.
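The comments earlier in this entry trace this to the NumPy 2.0 change in the `copy` keyword; a minimal standalone sketch of that behavior change (independent of `datasets`):
```python
import numpy as np

x = np.arange(5, dtype=np.int64)

# Works on both NumPy 1.x and 2.x: copies only when a copy is required.
np.asarray(x, dtype=np.float32)

# NumPy 1.x: silently copies. NumPy >= 2.0: raises
# "ValueError: Unable to avoid copy while creating an array as requested."
np.array(x, dtype=np.float32, copy=False)
```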
### Steps to reproduce the bug
1. Install NumPy > 2.0
2. Call `all_dataset.train_test_split(test_size=0.2, stratify_by_column="label")`
### Expected behavior
It returns a stratified split, matching the results produced with NumPy < 2.0.
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.35
- Python version: 3.13.7
- Huggingface_hub version: 0.34.4
- PyArrow version: 19.0.0
- Pandas version: 2.3.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7818/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7818/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 13 days, 16:09:25
|
https://api.github.com/repos/huggingface/datasets/issues/7816
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7816/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7816/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7816/events
|
https://github.com/huggingface/datasets/issues/7816
| 3,512,210,206
|
I_kwDODunzps7RWBMe
| 7,816
|
disable_progress_bar() not working as expected
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/windmaple",
"id": 5577741,
"login": "windmaple",
"node_id": "MDQ6VXNlcjU1Nzc3NDE=",
"organizations_url": "https://api.github.com/users/windmaple/orgs",
"received_events_url": "https://api.github.com/users/windmaple/received_events",
"repos_url": "https://api.github.com/users/windmaple/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/windmaple/subscriptions",
"type": "User",
"url": "https://api.github.com/users/windmaple",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"@xianbaoqian ",
"Closing this one since it's a Xet issue."
] | 2025-10-14T03:25:39
| 2025-10-14T23:49:26
| 2025-10-14T23:49:26
|
NONE
| null | null | null | null |
### Describe the bug
Hi,
I'm trying to load a dataset on a Kaggle TPU image. There is a known compatibility issue with progress bars on Kaggle, so I'm trying to disable the progress bar globally. This does not work, as you can see [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
In contrast, disabling the progress bar for `snapshot_download()` works as expected, as shown [here](https://www.kaggle.com/code/windmaple/snapshot-download-error).
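For reference, a minimal sketch of the two switches being compared (the exact import paths may vary by version; treat them as assumptions):
```python
from datasets.utils.logging import disable_progress_bar   # datasets-level switch
from huggingface_hub.utils import disable_progress_bars   # hub-level switch (snapshot_download)

disable_progress_bar()
disable_progress_bars()

from datasets import load_dataset
ds = load_dataset("mnist", split="train")  # any small dataset, used here only as an example
```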
### Steps to reproduce the bug
See this [notebook](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
There seems to be something wrong with `shell_paraent`.
### Expected behavior
The downloader should disable the progress bar and proceed without error.
### Environment info
The latest versions, installed with:
```
!pip install -U datasets ipywidgets ipykernel
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/windmaple",
"id": 5577741,
"login": "windmaple",
"node_id": "MDQ6VXNlcjU1Nzc3NDE=",
"organizations_url": "https://api.github.com/users/windmaple/orgs",
"received_events_url": "https://api.github.com/users/windmaple/received_events",
"repos_url": "https://api.github.com/users/windmaple/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/windmaple/subscriptions",
"type": "User",
"url": "https://api.github.com/users/windmaple",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7816/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7816/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 20:23:47
|
https://api.github.com/repos/huggingface/datasets/issues/7813
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7813/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7813/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7813/events
|
https://github.com/huggingface/datasets/issues/7813
| 3,503,446,288
|
I_kwDODunzps7Q0lkQ
| 7,813
|
Caching does not work when using python3.14
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/intexcor",
"id": 142020129,
"login": "intexcor",
"node_id": "U_kgDOCHcOIQ",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"repos_url": "https://api.github.com/users/intexcor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/intexcor",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"https://github.com/uqfoundation/dill/issues/725",
"@intexcor does #7817 fix your problem?"
] | 2025-10-10T15:36:46
| 2025-10-27T17:08:26
| 2025-10-27T17:08:26
|
NONE
| null | null | null | null |
### Describe the bug
```
Traceback (most recent call last):
File "/workspace/ctn.py", line 8, in <module>
ds = load_dataset(f"naver-clova-ix/synthdog-{lang}") # или "synthdog-zh" для китайского
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
path=path,
...<10 lines>...
**config_kwargs,
)
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 612, in _use_legacy_cache_dir_if_possible
self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 485, in _check_legacy_cache2
config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
return cls.hash_bytes(dumps(value))
~~~~~^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
dump(obj, file)
~~~~^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
Pickler(file, recurse=True).dump(obj)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
StockPickler.dump(self, obj)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 498, in dump
self.save(obj)
~~~~~~~~~^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
StockPickler.save(self, obj, save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 572, in save
f(self, obj) # Call unbound method with explicit self
~^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
StockPickler.save_dict(pickler, obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 1064, in save_dict
self._batch_setitems(obj.items(), obj)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
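The failure comes from a signature change in CPython 3.14's `pickle` module: `Pickler._batch_setitems` now receives the containing object as an extra positional argument, which `dill`'s override (written against the ≤3.13 signature) does not accept. Roughly (a sketch of the two signatures, not the actual `dill` code):
```python
# CPython <= 3.13 calls: self._batch_setitems(obj.items())
def _batch_setitems(self, items): ...

# CPython 3.14 calls: self._batch_setitems(obj.items(), obj)
def _batch_setitems(self, items, obj): ...
```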
### Steps to reproduce the bug
```python
ds_train = ds["train"].map(lambda x: {**x, "lang": lang})
```
### Expected behavior
The dataset loads and caching works, as on Python ≤ 3.13.
### Environment info
- `datasets` version: 4.2.0
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.39
- Python version: 3.14.0
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7813/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7813/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17 days, 1:31:40
|
https://api.github.com/repos/huggingface/datasets/issues/7811
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7811/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7811/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7811/events
|
https://github.com/huggingface/datasets/issues/7811
| 3,500,741,658
|
I_kwDODunzps7QqRQa
| 7,811
|
SIGSEGV when Python exits due to near null deref
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5192353?v=4",
"events_url": "https://api.github.com/users/iankronquist/events{/privacy}",
"followers_url": "https://api.github.com/users/iankronquist/followers",
"following_url": "https://api.github.com/users/iankronquist/following{/other_user}",
"gists_url": "https://api.github.com/users/iankronquist/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iankronquist",
"id": 5192353,
"login": "iankronquist",
"node_id": "MDQ6VXNlcjUxOTIzNTM=",
"organizations_url": "https://api.github.com/users/iankronquist/orgs",
"received_events_url": "https://api.github.com/users/iankronquist/received_events",
"repos_url": "https://api.github.com/users/iankronquist/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iankronquist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iankronquist/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iankronquist",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"The issue seems to come from `dill` which is a `datasets` dependency, e.g. this segfaults:\n\n```python\nimport dill\nfrom tqdm import tqdm\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\n```\n\n`tqdm` seems to segfault when `dill` is imported. I only found this about segfault but it's maybe not related https://github.com/tqdm/tqdm/issues/1678 ?",
"After more investigation it seems to be because of it imports `__main__`. This segfaults:\n\n```python\nimport __main__\nfrom tqdm import tqdm\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\n```\n\nI opened an issue at https://github.com/tqdm/tqdm/issues/1687",
"Here is a workaround. You can run your code as long as the progress bar is closed before exiting.\n\n```python\nfrom datasets import load_dataset\nfrom tqdm import tqdm\n\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\nprogress_bar.close() # avoids the segfault\n```",
"https://github.com/tqdm/tqdm/issues/1687#issuecomment-3392457094"
] | 2025-10-09T22:00:11
| 2025-10-10T22:09:24
| null |
NONE
| null | null | null | null |
### Describe the bug
When I run the following Python script using `datasets`, I get a segfault.
```python
from datasets import load_dataset
from tqdm import tqdm
progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
progress_bar.update(1)
```
```
% lldb -- python3 crashmin.py
(lldb) target create "python3"
Current executable set to '/Users/ian/bug/venv/bin/python3' (arm64).
(lldb) settings set -- target.run-args "crashmin.py"
(lldb) r
Process 8095 launched: '/Users/ian/bug/venv/bin/python3' (arm64)
Process 8095 stopped
* thread #2, stop reason = exec
frame #0: 0x0000000100014b30 dyld`_dyld_start
dyld`_dyld_start:
-> 0x100014b30 <+0>: mov x0, sp
0x100014b34 <+4>: and sp, x0, #0xfffffffffffffff0
0x100014b38 <+8>: mov x29, #0x0 ; =0
Target 0: (Python) stopped.
(lldb) c
Process 8095 resuming
cols : 0% 0/1000 [00:00<?, ?cols/s]Process 8095 stopped
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
_datetime.cpython-313-darwin.so`delta_new:
-> 0x101783454 <+188>: ldr x3, [x20, #0x10]
0x101783458 <+192>: adrp x0, 10
0x10178345c <+196>: add x0, x0, #0x6fc ; "seconds"
Target 0: (Python) stopped.
(lldb) bt
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
* frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
frame #1: 0x0000000100704b60 Python`type_call + 96
frame #2: 0x000000010067ba34 Python`_PyObject_MakeTpCall + 120
frame #3: 0x00000001007aae3c Python`_PyEval_EvalFrameDefault + 30236
frame #4: 0x000000010067c900 Python`PyObject_CallOneArg + 112
frame #5: 0x000000010070f0a0 Python`slot_tp_finalize + 116
frame #6: 0x000000010070c3b4 Python`subtype_dealloc + 788
frame #7: 0x00000001006c378c Python`insertdict + 756
frame #8: 0x00000001006db2b0 Python`_PyModule_ClearDict + 660
frame #9: 0x000000010080a9a8 Python`finalize_modules + 1772
frame #10: 0x0000000100809a44 Python`_Py_Finalize + 264
frame #11: 0x0000000100837630 Python`Py_RunMain + 252
frame #12: 0x0000000100837ef8 Python`pymain_main + 304
frame #13: 0x0000000100837f98 Python`Py_BytesMain + 40
frame #14: 0x000000019cfcc274 dyld`start + 2840
(lldb) register read x20
x20 = 0x0000000000000000
(lldb)
```
### Steps to reproduce the bug
Run the script above, and observe the segfault.
### Expected behavior
No segfault
### Environment info
```
% pip freeze datasets | grep -i datasets
datasets==4.2.0
(venv) 0 ~/bug 14:58:06
% pip freeze tqdm | grep -i tqdm
tqdm==4.67.1
(venv) 0 ~/bug 14:58:16
% python --version
Python 3.13.7
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7811/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7811/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7804
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7804/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7804/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7804/events
|
https://github.com/huggingface/datasets/issues/7804
| 3,498,534,596
|
I_kwDODunzps7Qh2bE
| 7,804
|
Support scientific data formats
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Please add the support for `Zarr`! That's what we use in the Bioimaging community. It is crucial, because raw upload of a *single* bio image can take _terrabytes in memory_!\n\nThe python library would be `bioio` or `zarr`:\n- [ ] Zarr: `bioio` or `zarr`\n\nSee a [Zarr example](https://ome.github.io/ome-ngff-validator/?source=https://uk1s3.embassy.ebi.ac.uk/bia-integrator-data/S-BIAD845/796b9fb8-f4ec-4c4b-bfc3-5cb00ccf19fe/796b9fb8-f4ec-4c4b-bfc3-5cb00ccf19fe.zarr)\n\ncc @joshmoore",
"@stefanches7 `zarr` is already usable with the hf hub as an array store. See this example from the [docs](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system):\n\n```python\nimport numpy as np\nimport zarr\n\nembeddings = np.random.randn(50000, 1000).astype(\"float32\")\n\n# Write an array to a repo\nwith zarr.open_group(\"hf://my-username/my-model-repo/array-store\", mode=\"w\") as root:\n foo = root.create_group(\"embeddings\")\n foobar = foo.zeros('experiment_0', shape=(50000, 1000), chunks=(10000, 1000), dtype='f4')\n foobar[:] = embeddings\n\n# Read an array from a repo\nwith zarr.open_group(\"hf://my-username/my-model-repo/array-store\", mode=\"r\") as root:\n first_row = root[\"embeddings/experiment_0\"][0]\n```\n\nIs there additional functionality that would not be covered by this?",
"@cakiki I think some tiling capabilities, as well as metadata / labels handling. Consult ome-zarr doc here: https://ome-zarr.readthedocs.io/en/stable/python.html\nVisualization would be the cherry on the top. \n\ncc @joshmoore @lubianat @St3V0Bay: curious what you think",
"zarr-specific dataset viewer would be very cool",
"A support for BIDS it would be perfect, I think it's possible to do all the biosinal can be done with mne. There's a cool community for decoding brain signals, and now with EMG. The new META bracelet EMG is saving things in BIDS.\n\nI can help to interface, coding and try to make this happen. I am available at hugging face discord with the username aristimunha, if some 1-to-1 discuss it would be necessary :)",
"@lhoestq , @cakiki , do you think we can make this happen?",
"If you give me the OK, I'll create the PR to make everything for a Biosignal Reader logic, I already studied the nilabel PR :)",
"That would be an amazing addition ! Feel free to ping me in your PR for review or if you have questions / if I can help",
"@bruAristimunha @lhoestq I've recalled a gold of a resource for BIDS: https://openneuro.org/\n\nDo you think there is a data-easy way to make those visible here on HuggingFace? Afaik they use `datalad` to fetch the data. Maybe the best way is to leave OpenNeuro as-is, not connecting it to HuggingFace at all - just an idea I had spontaneously.",
"I know an \"easy\" way to create interoperability with all biosignal datasets from OpenNeuro =) \n\nFor biosignal data, we can use [EEGDash](https://eegdash.org/) to create a Pytorch dataset, which automates fetch, lazy read, and converts to a pytorch dataset. \n\nI have a question about the best serialization for a Hugging Face dataset, but I can discuss it with some of you on Discord; my username is aristimunha.",
"I can explain it publicly too, but I think a short 5-minute conversation would be better than many, many texts to explain the details.",
"It's ok to have discussions in one place here (or in a separate issue if it's needed) - I also generally check github more often than discord ^^'",
"Hi @bruAristimunha @lhoestq any way we could proceed on this?\nI see someone posted a Nifti vizualization PR: https://github.com/huggingface/datasets/pull/7874 - I think it would be a shame if we couldn't accompany that by a neat way to import BIDS Nifti!",
"@stefanches7 author of #7874 here, would be open to expand the current support to BIDS as well after having a brief look. \nMaybe having a brief call over Discord (my username: TobiasPitters on the huggingface discord server) might help sorting things out, since I am not familiar with BIDS. So getting an understanding over test cases needed, etc. would be great!",
"Hey!!\n\nFrom a bids perspective, I can provide full support for all biosignal types (EEG, iEEG, MEG, EMG). BIDS is a well-established contract format; I believe we can design something that supports the entire medical domain. I think it just requires a few details to be aligned.\n\nFrom my perspective, the tricky part is how to best adapt and serialize from the Hugging Face perspective.\n\nUnder the hood, for the biosignal part, I think I would use [mne](https://mne.tools/) for interoperability and [eegdash](https://eegdash.org/dataset_summary.html) to create the serialized dataset, but we can definitely discuss this further. I will ping you @CloseChoice on Discord.",
"had a discussion with @neurolabusc and here's a quick wrap-up:\n - BIDS support would be huge (@bruAristimunha would be great if we could catch up on that)\n - DICOM support as well, but that might be harder due to a lot of variety in how headers are handled, vendor specifics etc. So to have a reliable pipeline to interact with whole folders of DICOM files (including metadata) would require a lot of work and a lot of testing. Therefore I set https://github.com/huggingface/datasets/pull/7835 back to draft mode. But there are tools that ease the way, especially https://github.com/ImagingDataCommons/highdicom (or potentially https://github.com/QIICR/dcmqi). \n - Getting users would help in order to understand what other formats/features are required therefore loading a bunch of open datasets to the hub using the new Nifti feature would be great. Some tutorials might help here as well.",
"Hi @CloseChoice and @bruAristimunha, glad to meet you both! We could appoint a call; I am currently moving to a new job, so the time slots are limited, but let's connect over Discord and see what we could do.\n\n* BIDS: our hackathon team @zuazo @ekarrieta @lakshya16157 put up a BIDS format converter: https://huggingface.co/spaces/stefanches/OpenBIDSifier. Might be useful for imaging dataset conversion to BIDS.\n* DICOM support: cc @St3V0Bay, the author of DICOM support in CroissantML (https://github.com/mlcommons/croissant/pull/942)\n\ncc @nolden",
"my username is aristimunha within the huggieng face discord to discuss more"
] | 2025-10-09T10:18:24
| 2025-11-26T16:09:43
| null |
MEMBER
| null | null | null | null |
List of formats and libraries we can use to load the data in `datasets`:
- [ ] DICOMs: pydicom
- [x] NIfTIs: nibabel
- [ ] WFDB: wfdb
cc @zaRizk7 for viz
Feel free to comment / suggest other formats and libs you'd like to see or to share your interest in one of the mentioned formats
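For context, minimal sketches of how each listed library reads a file on its own (illustrative only; the file names are placeholders and nothing here reflects a `datasets` API):
```python
import pydicom         # DICOM
import nibabel as nib  # NIfTI
import wfdb            # WFDB waveforms

dcm = pydicom.dcmread("slice.dcm")          # pixel data in dcm.pixel_array
nii = nib.load("scan.nii.gz")               # voxel array via nii.get_fdata()
rec = wfdb.rdrecord("100", pn_dir="mitdb")  # signal matrix in rec.p_signal
```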
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 4,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7804/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7804/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7802
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7802/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7802/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7802/events
|
https://github.com/huggingface/datasets/issues/7802
| 3,497,454,119
|
I_kwDODunzps7Qduon
| 7,802
|
[Docs] Missing documentation for `Dataset.from_dict`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/69421545?v=4",
"events_url": "https://api.github.com/users/aaronshenhao/events{/privacy}",
"followers_url": "https://api.github.com/users/aaronshenhao/followers",
"following_url": "https://api.github.com/users/aaronshenhao/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronshenhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aaronshenhao",
"id": 69421545,
"login": "aaronshenhao",
"node_id": "MDQ6VXNlcjY5NDIxNTQ1",
"organizations_url": "https://api.github.com/users/aaronshenhao/orgs",
"received_events_url": "https://api.github.com/users/aaronshenhao/received_events",
"repos_url": "https://api.github.com/users/aaronshenhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aaronshenhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronshenhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aaronshenhao",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I'd like to work on this documentation issue.",
"Hi I'd like to work on this. I can see the docstring is already in the code. \nCould you confirm:\n1. Is this still available?\n2. Should I add this to the main_classes.md file, or is there a specific \n documentation file I should update?\n3. Are there any formatting guidelines I should follow?\n\nI'm new to contributing but eager to learn!"
] | 2025-10-09T02:54:41
| 2025-10-19T16:09:33
| null |
NONE
| null | null | null | null |
Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the official documentation for the `Dataset` class on HuggingFace.
The method in question:
```python
@classmethod
def from_dict(
cls,
mapping: dict,
features: Optional[Features] = None,
info: Optional[DatasetInfo] = None,
split: Optional[NamedSplit] = None,
) -> "Dataset":
"""
Convert `dict` to a `pyarrow.Table` to create a [`Dataset`].
Important: a dataset created with from_dict() lives in memory
and therefore doesn't have an associated cache directory.
This may change in the future, but in the meantime if you
want to reduce memory usage you should write it back on disk
and reload using e.g. save_to_disk / load_from_disk.
Args:
mapping (`Mapping`):
Mapping of strings to Arrays or Python lists.
features ([`Features`], *optional*):
Dataset features.
info (`DatasetInfo`, *optional*):
Dataset information, like description, citation, etc.
split (`NamedSplit`, *optional*):
Name of the dataset split.
Returns:
[`Dataset`]
"""
```
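A short usage example of the method whose documentation entry is missing (standard `datasets` API):
```python
from datasets import ClassLabel, Dataset, Features, Value

ds = Dataset.from_dict(
    {"text": ["good", "bad"], "label": [1, 0]},
    features=Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])}),
)
print(ds)     # Dataset({features: ['text', 'label'], num_rows: 2})
print(ds[0])  # {'text': 'good', 'label': 1}
```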
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7802/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7802/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7798
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7798/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7798/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7798/events
|
https://github.com/huggingface/datasets/issues/7798
| 3,484,470,782
|
I_kwDODunzps7PsM3-
| 7,798
|
Audio dataset is not decoding on 4.1.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4",
"events_url": "https://api.github.com/users/thewh1teagle/events{/privacy}",
"followers_url": "https://api.github.com/users/thewh1teagle/followers",
"following_url": "https://api.github.com/users/thewh1teagle/following{/other_user}",
"gists_url": "https://api.github.com/users/thewh1teagle/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thewh1teagle",
"id": 61390950,
"login": "thewh1teagle",
"node_id": "MDQ6VXNlcjYxMzkwOTUw",
"organizations_url": "https://api.github.com/users/thewh1teagle/orgs",
"received_events_url": "https://api.github.com/users/thewh1teagle/received_events",
"repos_url": "https://api.github.com/users/thewh1teagle/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thewh1teagle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thewh1teagle/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thewh1teagle",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Previously (datasets<=3.6.0), audio columns were decoded automatically when accessing a row. Now, for performance reasons, audio decoding is lazy by default: you just see the file path unless you explicitly cast the column to Audio.\n\nHere’s the fix (following the current [datasets audio docs](https://huggingface.co/docs/datasets/en/audio_load)\n):\n\n```\nfrom datasets import load_dataset, Audio\n\ndataset = load_dataset(\"MrDragonFox/Elise\", split=\"train\")\n\n# Explicitly decode the audio column\ndataset = dataset.cast_column(\"audio\", Audio(sampling_rate=16_000))\n\nprint(dataset[0][\"audio\"])\n# {'path': '...', 'array': array([...], dtype=float32), 'sampling_rate': 16000}\n```",
"@haitam03-yo's comment is right that the data is not decoded by default anymore indeed, but here is how it works in practice now:\n\nFrom `datasets` v4, audio data are read as [AudioDecoder](https://meta-pytorch.org/torchcodec/0.4/generated/torchcodec.decoders.AudioDecoder.html) objects from torchcodec. This doesn't decode the data by default, but you can call `audio.get_all_samples()` to decode the audio.\n\nSee the documentation on how to process audio data here: https://huggingface.co/docs/datasets/audio_process",
"To resolve this, you need to explicitly cast the audio column to the Audio feature. This will decode the audio data and make it accessible as an array. Here is the corrected code snippet\n\n\nfrom datasets import load_dataset, Audio\n\n# Load your dataset\ndataset = load_dataset(\"MrDragonFox/Elise\", split=\"train\")\n\n# Explicitly cast the 'audio' column to the Audio feature\ndataset = dataset.cast_column(\"audio\", Audio(sampling_rate=16_000))\n\n# Now you can access the decoded audio array\nprint(dataset[0][\"audio\"])\n\nBy adding the cast_column step, you are telling the datasets library to decode the audio data with the specified sampling rate, and you will then be able to access the audio array as you were used to in previous versions."
] | 2025-10-05T06:37:50
| 2025-10-06T14:07:55
| null |
NONE
| null | null | null | null |
### Describe the bug
The audio column remains as non-decoded objects even when accessing the rows.
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
Works fine with `datasets==3.6.0`
Followed the docs in
- https://huggingface.co/docs/datasets/en/audio_load
### Steps to reproduce the bug
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
### Expected behavior
It should decode the audio when accessing the element.
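Based on the comments earlier in this entry, a sketch of how decoding works in `datasets` ≥ 4 (lazy torchcodec `AudioDecoder` objects instead of eagerly decoded arrays):
```python
from datasets import Audio, load_dataset

ds = load_dataset("MrDragonFox/Elise", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

decoder = ds[0]["audio"]             # AudioDecoder: not decoded yet
samples = decoder.get_all_samples()  # decode on demand
print(samples.sample_rate, samples.data.shape)
```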
### Environment info
- `datasets` version: 4.1.1
- Platform: Ubuntu 22.04
Related
- https://github.com/huggingface/datasets/issues/7707
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7798/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7798/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7793
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7793/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7793/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7793/events
|
https://github.com/huggingface/datasets/issues/7793
| 3,459,496,971
|
I_kwDODunzps7OM7wL
| 7,793
|
Cannot load dataset, fails with nested data conversions not implemented for chunked array outputs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41182432?v=4",
"events_url": "https://api.github.com/users/neevparikh/events{/privacy}",
"followers_url": "https://api.github.com/users/neevparikh/followers",
"following_url": "https://api.github.com/users/neevparikh/following{/other_user}",
"gists_url": "https://api.github.com/users/neevparikh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neevparikh",
"id": 41182432,
"login": "neevparikh",
"node_id": "MDQ6VXNlcjQxMTgyNDMy",
"organizations_url": "https://api.github.com/users/neevparikh/orgs",
"received_events_url": "https://api.github.com/users/neevparikh/received_events",
"repos_url": "https://api.github.com/users/neevparikh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neevparikh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neevparikh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neevparikh",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hey @neevparikh,\nThanks for reporting this! I can reproduce the issue and have identified the root cause.\nProblem: The metr-evals/malt-public dataset contains deeply nested conversation data that exceeds PyArrow's 16MB chunk limit. When PyArrow tries to read it in chunks, it hits a fundamental limitation: \"Nested data conversions not implemented for chunked array outputs\".\nRoot Cause: Your dataset has large nested arrays (conversation trees with 4k-87k elements) that get automatically chunked by PyArrow, but the nested data conversion logic can't handle repetition levels across chunk boundaries\n I'm preparing a PR that adds a fallback mechanism to the parquet reader. When this specific error occurs, it will:\n\nDetect the nested data issue\nCombine chunks selectively for problematic columns\nContinue processing normally\n\nThis maintains backward compatibility while fixing the issue for nested datasets like yours.\nWorkaround (if you need immediate access): Try loading with smaller batch sizes:\npythonds = datasets.load_dataset(\"metr-evals/malt-public\", name=\"irrelevant_detail\", \n download_config=datasets.DownloadConfig(\n parquet_batch_size=1000\n ))"
] | 2025-09-27T01:03:12
| 2025-09-27T21:35:31
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi! When I load this dataset, it fails with a pyarrow error. I'm using datasets 4.1.1, though I also see this with datasets 4.1.2
To reproduce:
```python
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
Error:
```
Traceback (most recent call last):
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
for _, table in generator:
^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/packaged_modules/parquet/parquet.py", line 93, in _generate_tables
for batch_idx, record_batch in enumerate(
~~~~~~~~~^
parquet_fragment.to_batches(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
):
^
File "pyarrow/_dataset.pyx", line 3904, in _iterator
File "pyarrow/_dataset.pyx", line 3494, in pyarrow._dataset.TaggedRecordBatchIterator.__next__
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/neev/scratch/test_hf.py", line 3, in <module>
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/load.py", line 1412, in load_dataset
builder_instance.download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
download_config=download_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
storage_options=storage_options,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager=dl_manager,
^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
**download_and_prepare_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
To reproduce:
```python
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
### Expected behavior
The dataset loads
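One way to narrow down whether the failure lies in PyArrow itself or in the `datasets` builder is to read a shard directly with PyArrow (a hedged diagnostic sketch; the shard filename is hypothetical and would need to be taken from the dataset's file listing):
```python
import pyarrow.parquet as pq
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    "metr-evals/malt-public",
    filename="irrelevant_detail/train-00000-of-00001.parquet",  # hypothetical shard name
    repo_type="dataset",
)
table = pq.read_table(path)  # reads whole row groups instead of the to_batches() path
print(table.num_rows)
```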
### Environment info
Datasets: 4.1.1
Python: 3.13
Platform: Macos
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7793/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7793/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7792
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7792/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7792/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7792/events
|
https://github.com/huggingface/datasets/issues/7792
| 3,456,802,210
|
I_kwDODunzps7OCp2i
| 7,792
|
Concatenate IterableDataset instances and distribute underlying shards in a RoundRobin manner
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/LTMeyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LTMeyer",
"id": 13559010,
"login": "LTMeyer",
"node_id": "MDQ6VXNlcjEzNTU5MDEw",
"organizations_url": "https://api.github.com/users/LTMeyer/orgs",
"received_events_url": "https://api.github.com/users/LTMeyer/received_events",
"repos_url": "https://api.github.com/users/LTMeyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LTMeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LTMeyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LTMeyer",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"# With `datasets.Dataset`\n\nHere is an small script that shows the distribution differences of samples between `interleave_datasets`, `concatenate_datasets` and `concatenate_datasets` + shuffling.\n\n```python\nimport datasets as hf_datasets\n\ndef gen(dataset: int, n_samples: int):\n for i in range(n_samples):\n yield {\"dataset\": dataset, \"sample\": i}\n\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 2})\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 1})\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 3})\n\nn_workers = 3\nprint(f\"Simulate run with {n_workers} workers\")\n\nprint(\"Interleave datasets\")\nfor w in range(n_workers):\n ds_interleave = hf_datasets.interleave_datasets([ds_1, ds_2, ds_3]).shard(n_workers, w)\n for i, sample in enumerate(ds_interleave):\n print(f\"Worker {w} process sample {i} {sample}\")\n\nprint(\"Concatenate datasets\")\nfor w in range(n_workers):\n ds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shard(n_workers, w)\n for i, sample in enumerate(ds_concatenate):\n print(f\"Worker {w} process sample {i} {sample}\")\n\nprint(\"Concated and shuffled datasets\")\nfor w in range(n_workers):\n ds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shuffle().shard(n_workers, w)\n for i, sample in enumerate(ds_concatenate):\n print(f\"Worker {w} process sample {i} {sample}\")\n```\n\n> Interleave datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 0}\n\n> Concatenate datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 1 {'dataset': 2, 'sample': 2}\n\n> Concated and shuffled datasets\nWorker 0 process sample 0 {'dataset': 2, 'sample': 2}\nWorker 0 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 2, 'sample': 2}\nWorker 2 process sample 1 {'dataset': 0, 'sample': 0}\n\nWithout shuffling, round robin would yield:\n> Worker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 1 {'dataset': 2, 'sample': 2}",
"# With `datasets.IterableDataset`\n\nThe above works for `Dataset`, but with a sharded `IterableDataset` some data get discarded. See the following results obtained with the script below.\n\n> Simulate run with 3 workers\n\n> Interleave datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 1 fails with list index out of range.\nWorker 2 fails with list index out of range.\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n\n> Concatenate datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 1, 'sample': 0}\nWorker 0 process sample 2 {'dataset': 2, 'sample': 0}\nWorker 1 fails with list index out of range\nWorker 2 fails with list index out of range\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n\n> Concated and shuffled datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 1, 'sample': 0}\nWorker 0 process sample 2 {'dataset': 2, 'sample': 0}\nWorker 1 fails with list index out of range\nWorker 2 fails with list index out of range\nWith dataloader\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n\n<details>\n\n<summary>Experiment script</summary>\n\n```python\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 2}).to_iterable_dataset(\n num_shards=2\n)\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 1}).to_iterable_dataset(\n num_shards=1\n)\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 3}).to_iterable_dataset(\n num_shards=3\n)\n\nn_workers = 3\nprint(f\"Simulate run with {n_workers} workers\")\n\nprint(\"\\nInterleave datasets\")\nds_interleave = hf_datasets.interleave_datasets([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_interleave.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}.\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_interleave, num_workers=n_workers):\n print(f\"{sample}\")\n\nprint(\"\\nConcatenate datasets\")\nds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_concatenate.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_concatenate, num_workers=n_workers):\n print(f\"{sample}\")\n\nprint(\"\\nConcated and shuffled datasets\")\nds_concatenate = hf_datasets.concatenate_datasets([ds_1, ds_2, ds_3]).shuffle()\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_concatenate.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_concatenate, 
num_workers=n_workers):\n print(f\"{sample}\")\n```\n\n</details>\n\n# Round Robin with fixed logic\n\n> I started implementing the following, but I'm afraid my sharding logic is incorrect.\n\nHere is a solution for mixing the data in a round robin fashion that allows to distribute the data to all workers. In the previous example above only 1 worker over 3 was actually retrieving data, which resulted in discarding some data.\n\n```python\ndef shard_data_sources(self, num_shards: int, index: int, contiguous=True) -> \"MixMultiSourceExampleIterable\":\n \"\"\"Shard the underlying iterables in a roundrobin manner.\n\n Let's consider we have our iterables as [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]],\n and we request 3 shards.\n index 0 gets s0_0 s2_0\n index 1 gets s0_1 s2_1\n index 2 gets s1_0 s2_3\n \"\"\"\n return MixMultiSourcesExampleIterable(\n list(\n islice(\n # flatten all underlying iterables (fixed logic)\n [\n ex_iterable.shard_data_sources(ex_iterable.num_shards, index)\n for ex_iterable in self.ex_iterables\n for index in range(ex_iterable.num_shards)\n ],\n # offset the starting point by the index\n index,\n # take over the full list, so exhaust the iterators\n None,\n # step by the number of shards requested\n num_shards,\n )\n )\n )\n```\n\nEditing the example above with the following we obtain the expected result:\n```python\nprint(\"\\nMix datasets\")\nds_mix = mix_dataset([ds_1, ds_2, ds_3])\nfor w in range(n_workers):\n try:\n for i, sample in enumerate(ds_mix.shard(n_workers, w)):\n print(f\"Worker {w} process sample {i} {sample}\")\n except IndexError as e:\n print(f\"Worker {w} fails with {e}\")\n\nprint(\"With dataloader\")\nfor sample in torch.utils.data.DataLoader(ds_mix, num_workers=n_workers):\n print(f\"{sample}\")\n```\n> Mix datasets\nMix datasets\nWorker 0 process sample 0 {'dataset': 0, 'sample': 0}\nWorker 0 process sample 1 {'dataset': 2, 'sample': 0}\nWorker 1 process sample 0 {'dataset': 0, 'sample': 1}\nWorker 1 process sample 1 {'dataset': 2, 'sample': 1}\nWorker 2 process sample 0 {'dataset': 1, 'sample': 0}\nWorker 2 process sample 1 {'dataset': 2, 'sample': 2}\nWith dataloader\n{'dataset': tensor([0]), 'sample': tensor([0])}\n{'dataset': tensor([0]), 'sample': tensor([1])}\n{'dataset': tensor([1]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([0])}\n{'dataset': tensor([2]), 'sample': tensor([1])}\n{'dataset': tensor([2]), 'sample': tensor([2])}\n\n# Questions \n\n- The example is quite small, showing that some data get discarded, but on large datasets is this significant?\n- How does the suggested solution interplays with shuffling?\n\n\n\n\n",
"# Larger Experiment\n\n> The example is quite small, showing that some data get discarded, but on large datasets is this significant?\n\nContinuing the experiment above, but with 3 larger and unbalanced datasets, with respectively 1000, 150, and 300 samples, and a dataloader with 4 workers:\n \n> Interleave datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 300 samples\n\n> Concatenate datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\n> Concated and shuffled datasets\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\n> Mix datasets\nWith dataloader\nYield 1405 samples\n\nThe dataset mixing proposed above is the only one that yields all the samples while using all the dataloaders.\nAdditional checks should include training metrics (does it improve training quality to mix the data like this), and behavior check in a DDP settings, we don't want to face any deadlock due to some GPU having more batches than other. But this later point should be already handled by the iterator of the `IterableDataset`.\n\n# Follow up?\n\n@lhoestq would there be any interest in making a PR of it? Otherwise I can close the issue as I found a solution to my problem. ",
"I believe this PR could solve your issue? :)\n\nhttps://github.com/huggingface/datasets/pull/7786",
"> I believe this PR could solve your issue? :)\n\nThank you @lhoestq for the reply.\nI have just tested it with the script above. It gives:\n\n> Interleave datasets without replacement\nWith dataloader\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nYield 705 samples\n\nIf we compare with the original `interleave_dataset` method it produces 405 samples more. However, it only uses 1 worker on the 4 available. Moreover it doesn't yield all the samples as the mixing strategy with RoundRobin above does (1405 samples vs 705).",
"@LTMeyer With the following script and using the code from #7786 I get all 1450 samples\n\n```\nimport datasets as hf_datasets\n\n\ndef gen(dataset: int, n_samples: int):\n for i in range(n_samples):\n yield {\"dataset\": dataset, \"sample\": i}\n\n\nds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 1000}).to_iterable_dataset()\nds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 150}).to_iterable_dataset()\nds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 300}).to_iterable_dataset()\n\nprint(\"Interleave datasets\")\nds_interleave = hf_datasets.interleave_datasets(\n [ds_1, ds_2, ds_3],\n probabilities=[1 / 3, 1 / 3, 1 / 3],\n stopping_strategy=\"all_exhausted_without_replacement\",\n)\nfor i, sample in enumerate(ds_interleave):\n print(f\"process sample {i} {sample}\")\n```\nI'm not sure on the workers side how many will be spawned and so on. ",
"> [@LTMeyer](https://github.com/LTMeyer) With the following script and using the code from [#7786](https://github.com/huggingface/datasets/pull/7786) I get all 1450 samples\n\nThis depends on the number of shards and the number of processes being used.\nIn the example below there is only one shard per dataset (the default of `to_iterable_dataset` method). Then, the for loop is running in the main process. It thus consumes all the shards, hence the 1450 samples.\n\n> \n> ```\n> import datasets as hf_datasets\n> \n> \n> def gen(dataset: int, n_samples: int):\n> for i in range(n_samples):\n> yield {\"dataset\": dataset, \"sample\": i}\n> \n> \n> ds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 0, \"n_samples\": 1000}).to_iterable_dataset()\n> ds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 1, \"n_samples\": 150}).to_iterable_dataset()\n> ds_3 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={\"dataset\": 2, \"n_samples\": 300}).to_iterable_dataset()\n> \n> print(\"Interleave datasets\")\n> ds_interleave = hf_datasets.interleave_datasets(\n> [ds_1, ds_2, ds_3],\n> probabilities=[1 / 3, 1 / 3, 1 / 3],\n> stopping_strategy=\"all_exhausted_without_replacement\",\n> )\n> for i, sample in enumerate(ds_interleave):\n> print(f\"process sample {i} {sample}\")\n> ```\n> \n\n\n> I'm not sure on the workers side how many will be spawned and so on.\n\nWhile using the data to train a model, I would like to use the `torch.utils.data.DataLoader` to feed batches of data to my model. To make the data loading fast, it is common to use `num_workers>0` in the dataloader. This will consume data in parallel. In practice, it copies the dataset instance and read in parallel different chunks of data. These chunks correspond to the underlying shards of the iterable dataset.\n\nIf we have 1 shard per dataset, as it is the case in the example above, the dataloading will indeed get all the 1450 samples, but it will run only in one process even if multiple are available. This is inefficient because it doesn't utilize all available resources. See the script and results below.\n\n```python\nfor num_workers in [0, 1, 2, 3, 4]:\n print(f\"Dataloader with {num_workers} workers.\")\n dataloader = DataLoader(ds_interleave, num_workers=num_workers, batch_size=1)\n for i, sample in enumerate(dataloader, start=1):\n pass\n print(f\"{i} processed samples\")\n```\n\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n1450 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n1450 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\n1450 processed samples\n```\n\nNow if we shard our data differently, like 2, 1, and 3 for each dataset respectively as the [previous example](https://github.com/huggingface/datasets/issues/7792#issuecomment-3345970293), and use a dataloader with different number of workers (same script as above), we obtain:\n\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). 
Stopping 1 dataloader workers.\n850 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n750 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\n750 processed samples\n```",
"I added a small fix to your PR @radulescupetru to try to make @LTMeyer 's example work :)\n\nCan you confirm it works for you now @LTMeyer ?\n\nNote that maximum parallelism requires each subset to have num_shards >= num_workers, otherwise there aren't enough shards to distribute to every worker for interleaving. In your example one of the subsets has only 1 shard, so only 1 worker can take care of interleaving.",
"> Can you confirm it works for you now [@LTMeyer](https://github.com/LTMeyer) ?\n\nResult with https://github.com/huggingface/datasets/pull/7786/commits/a547d81469128bea4acc3bcc2a4a6a95968936ee:\n```\nDataloader with 0 workers.\n1450 processed samples\nDataloader with 1 workers.\n1450 processed samples\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n1450 processed samples\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n1450 processed samples\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\n1450 processed samples\n```\n\n I have checked with the script above and I confirm that all samples are now correctly returned, thank you @lhoestq .\n\n> Note that maximum parallelism requires each subset to have num_shards >= num_workers, otherwise there aren't enough shards to distribute to every worker for interleaving. In your example one of the subsets has only 1 shard, so only 1 worker can take care of interleaving.\n\nThis point I'm not sure I understand. That is maybe where @radulescupetru's intent and mine differ. Why should we limit the number of workers to the minimum number of shards? My initial goal was to distribute shards among workers to maximize data loading speed, and to mix the data so batches are representative of the whole dataset and diverse enough (hence the round-robin). \n\nIn the example above, we have 6 shards in total, can we not distribute these shards among workers? That what the `MixMultiSourcesExampleIterable` in https://github.com/huggingface/datasets/issues/7792#issuecomment-3345970293 above does.\n- If 2 workers, 3 shards for each. \n- If 3 workers, 2 shards for each.\n- If 4 workers, the 2 first ones get 2 shards while the two last ones get only 1.\n- Above 6 workers, the 6 first ones get 1 shard each, and the remaining workers get none.\n\n\n",
"@LTMeyer I think it's just a design choice that datasets library took. From my interaction with it, it seems that even when concatenating or interleaving, individual components are still treated individually (for example, num_shards is not summed).\n\nI guess in a real scenario you wouldn't end up with 1 shard only, but it's true that you need to be a bit careful with the setup. For workers it's a bit more automated in the sense that if you have more it will stop the extra ones, but when distributing a dataset over multiple gpus it's even more tricky as if the number of shards is not a factor of world size iterating is slower.",
"> [@LTMeyer](https://github.com/LTMeyer) I think it's just a design choice that datasets library took. From my interaction with it, it seems that even when concatenating or interleaving, individual components are still treated individually (for example, num_shards is not summed).\n\nIndeed. I am curious to know if there is any explanation for this choice that I am missing.\n\n> I guess in a real scenario you wouldn't end up with 1 shard only, but it's true that you need to be a bit careful with the setup. \n\nIn my case I would like to mix many small datasets which are individually based on only few shards. So it's actually close to the case with 1 shard only.\n\n> For workers it's a bit more automated in the sense that if you have more it will stop the extra ones, but when distributing a dataset over multiple gpus it's even more tricky as if the number of shards is not a factor of world size iterating is slower.\n\nMy understanding is that, in a multi-gpu settings, we want each GPU to receive the same number of batches to avoid deadlock in any synchronization process. \nMulti-GPU related sharding of the `IterableDataset` is managed there https://github.com/huggingface/datasets/blob/4.1.1/src/datasets/iterable_dataset.py#L2371-L2392,\nwhile the sharding for dataloaders with multiple workers is handled there https://github.com/huggingface/datasets/blob/4.1.1/src/datasets/iterable_dataset.py#L2292-L2314.\n\nHere is a script to check the behavior in case of multi-gpus, using `split_dataset_by_node`. In the example I consider just 2 GPUs.\n\n```python\nworld_size = 2\nfor num_workers in [0, 1, 2, 3, 4]:\n for rank in range(world_size):\n print(f\"Rank {rank}\")\n ds_interleave_rank = split_dataset_by_node(ds_interleave, rank, world_size)\n print(f\"Dataloader with {num_workers} workers.\")\n dataloader = DataLoader(ds_interleave_rank, num_workers=num_workers, batch_size=1)\n for i in enumerate(dataloader, start=1):\n pass\n print(f\"{i} processed samples\")\n print(\"\\n\")\n```\n\nThe results using https://github.com/huggingface/datasets/pull/7786/commits/455bfaaa6d574aa9d9c9592baee390017512cc5f:\n```\nRank 0\nDataloader with 0 workers.\n725 processed samples\nRank 1\nDataloader with 0 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 1 workers.\n725 processed samples\nRank 1\nDataloader with 1 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 2 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 3 workers.\n725 processed samples\n\n\nRank 0\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). 
Stopping 3 dataloader workers.\n725 processed samples\nRank 1\nDataloader with 4 workers.\n725 processed samples\n```\n\nIf now I use the mixing described above the results are:\n```\nRank 0\nDataloader with 0 workers.\n750 processed samples\nRank 1\nDataloader with 0 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 1 workers.\n750 processed samples\nRank 1\nDataloader with 1 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 2 workers.\n750 processed samples\nRank 1\nDataloader with 2 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 3 workers.\n750 processed samples\nRank 1\nDataloader with 3 workers.\n700 processed samples\n\n\nRank 0\nDataloader with 4 workers.\n750 processed samples\nRank 1\nDataloader with 4 workers.\n700 processed samples\n```\n\nDifferent GPUs received different number of batches which is problematic. The interleave method, on the other hand, feeds each GPU with the same number of batches. Nonetheless, it doesn't leverage all available workers.\nI'll check if I can fix the distribution of shards across GPU in the last configuration.",
"When concatenating or interleaving, the resulting `num_shards` is the *minimum `num_shards` of the input datasets*. This allows each new shard to always contain data from every input dataset. This ensures in every shard the right sampling when interleaving and the right data order when concatenating.\n\nSumming the dataset shards isn't ideal since each shard would contain data from only one of the dataset and would not contain any interleaved/concatenated data.",
"Thank you @lhoestq, it makes perfect sense. The part I am missing is that if I concatenate many datasets with small number of shards it will result in a global dataset with not so many shards, thus limiting the use of available workers. Data loading will be consequently inefficient. I was looking for a solution to leverage all parallelism available to maximize data loading speed.\n\nMy original use case was:\nI want to use a dataset stored on the HF hub. It is composed of many subfolders. Each of this subfolder contain only a few shards. I would like to use the dataset but only on a subset of folders, while keeping information about the origin of each sample (i.e. from which subfolder they come from).\nThe first part would possible with the `data_files` argument of `load_dataset` method. However, I would not have the origin information about the sample, as it is not provided in the original dataset. I was thus thinking about considering each subfolder as an independent HF iterable dataset and concatenate them. This method does not work because it drastically reduces the dataloading efficiency due to the low number of shards.\n\n> Summing the dataset shards isn't ideal `since` each shard would contain data from only one of the dataset and would not contain any interleaved/concatenated data.\n\nThis is not necessarily a problem for my use case. It will be the case for the original dataset anyway.",
"Also, I notice in the example above that if we modify the number of shards, we get different number of samples per GPU and workers even with the implementation of @radulescupetru. This will cause a deadlock in the DDP. So I guess HF expects all shards to contain the same number of samples. Is that a correct assumption @lhoestq?\n\nSetting the number of shards for the datasets above to 2, 2 and 3. Using the `interleave_datasets` I get the following:\n```\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 0 workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 0 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 1 workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 1 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 2 workers.\nToo many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 2 (max is dataset.num_shards=1). Stopping 1 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 2 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 3 workers.\nToo many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 3 (max is dataset.num_shards=1). Stopping 2 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 3 workers.\n675 processed samples\n\n\nRank 0\nAssigning 1 shard (or data source) of the dataset to each node.\nDataloader with 4 workers.\nToo many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nWARNING:datasets.iterable_dataset:Too many dataloader workers: 4 (max is dataset.num_shards=1). Stopping 3 dataloader workers.\nAssigning 1 shard (or data source) of the dataset to each node.\n775 processed samples\nRank 1\nDataloader with 4 workers.\n675 processed samples\n```",
"I see @LTMeyer, that makes sense. Do you think we should sum the shards by default for concatenating then ? I feel like your use case is more important than ensuring each worker has data of every subdataset in order.\n\n(I wouldn't touch the interleaving logic though)\n\n> Also, I notice in the example above that if we modify the number of shards, we get different number of samples per GPU and workers even with the implementation of @radulescupetru. This will cause a deadlock in the DDP. So I guess HF expects all shards to contain the same number of samples. Is that a correct assumption @lhoestq?\n\nShards rarely have the same number of samples, so the DDP algorithm itself should be able to stop on its own or have a strategy to circumvent this. For example it can loop until all the nodes have exhausted their data:\n\n```python\ndef loop():\n while True:\n yield from dataloader\n yield \"end\"\n\nfor x in loop():\n if x == \"end\":\n exhausted[rank] = True\n continue\n # stop once the data from all the ranks are exhausted\n dist.all_reduce(exhausted)\n if torch.all(exhausted):\n break\n # do your forward pass + loss here\n # model.forward(...)\n```\n\nI made a full example here: https://github.com/huggingface/datasets/issues/6623#issuecomment-2379458138",
"To summarize, and highlight the distinction with https://github.com/huggingface/datasets/pull/7786, there are actually two feature requests:\n1. Similarly to `interleave_datasets`, we want to interleave the longest dataset without repetition. This is handled by https://github.com/huggingface/datasets/pull/7786, and is consistant with the rest of the HF features (i.e. `concatenate_datasets` and `interleave_datasets`);\n2. We want to be able to _fuse_ datasets and distribute their shards across workers to maximize data loading speed.\n\n > I feel like your use case is more important than ensuring each worker has data of every subdataset in order.\n\nIndeed my use case, pointed as 2. above is first about maximizing data loading speed and second about mixing the data. The order of priority seems to be the opposite in 1.\n\n> Do you think we should sum the shards by default for concatenating then?\n\nI think the library should at least provide a method for this. Users can then decide what matters the most for their use case (data order or dataloading speed). What do you think?\n\n> Shards rarely have the same number of samples, so the DDP algorithm itself should be able to stop on its own or have a strategy to circumvent this.\n\nIf imbalanced data stream in a DDP context is not the responsibility of the datasets library, it is, for me, a reason more to provides a fuse or mix dataset method that sum the shards.\n\n> I made a full example here: https://github.com/huggingface/datasets/issues/6623#issuecomment-2379458138 \n\nThank you for the example. Pytorch now provides also utilities to handle this problematic case, see [Join context manager in DDP](https://docs.pytorch.org/tutorials/advanced/generic_join.html#:%7E:text=The%20context%20manager%20allows%20the,shadowed%20are%20specified%20by%20hooks)",
"I'm closing this issue because of several existing solutions:\n- https://github.com/huggingface/datasets/pull/7786 allows to interleave datasets without replacement.\n- Using [`.shard`](https://huggingface.co/docs/datasets/v4.2.0/en/package_reference/main_classes#datasets.IterableDataset.shard) instead of [`split_dataset_by_node`](https://huggingface.co/docs/datasets/v4.2.0/en/package_reference/main_classes#datasets.distributed.split_dataset_by_node). Given _m_ shards and _n_ ranks, if m % n != 0, the later function will make each of the _n_ ranks go through all of the _m_ shards, although not fetching the same data. On the other hand, the former function can distribute the _m_ shards across the _n_ ranks and make better use of parallel reads.\n\nThank you @lhoestq and @radulescupetru for the help."
] | 2025-09-26T10:05:19
| 2025-10-15T18:05:23
| 2025-10-15T18:05:23
|
NONE
| null | null | null | null |
### Feature request
I would like to be able to concatenate multiple `IterableDataset` objects with possibly different features. I would like to then be able to stream the results in parallel (both using DDP and multiple workers in the PyTorch DataLoader). I want the merged dataset to be well balanced across the different processes.
### Motivation
I want to train a model on a combination of datasets, which I can convert to a single representation. This applies both to converting items from different datasets to the same Python class and to using a tokenizer on multiple modalities.
Since my original datasets are not necessarily well balanced (they may have different sizes and thus different numbers of shards), I would like the merged dataset to be distributed evenly over the multiple processes. I don't mind if it's not perfectly balanced and, as a result, some workers of the torch DataLoader do nothing, as long as DDP is handled properly and no deadlock occurs.
### What I've tried
I've tried the two functions already provided in datasets, namely `interleave_datasets` and `concatenate_datasets`.
- Interleave seems to be the closest to what I'm trying to do. However, it doesn't suit my purpose because, as I understand it, it either stops as soon as one of the dataset sources is exhausted, or repeats the smallest source's items until the largest is exhausted. I would like something in between, similar to what [roundrobin does](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.roundrobin); see the small example below.
- Concatenate does not mix the data enough and one dataset may be overrepresented in some early batches.
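For reference, a minimal sketch of the roundrobin semantics I am after (using `more_itertools`, which keeps cycling through the remaining iterables until every one of them is exhausted):
```python
from more_itertools import roundrobin

# one item from each iterable in turn, skipping exhausted ones,
# until all iterables are fully consumed
print(list(roundrobin("ABC", "D", "EF")))  # ['A', 'D', 'E', 'B', 'F', 'C']
```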
Let's consider we have 3 datasets composed of different numbers of shards, as follows: [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]], where s denotes an underlying shard, the first index the dataset, and the second the shard number.
If we request 3 shards from `shard_data_sources`, we should obtain the following:
index 0 gets s0_0 s2_0
index 1 gets s0_1 s2_1
index 2 gets s1_0 s2_3
I started implementing the following, but I'm afraid my sharding logic is incorrect.
```python
from copy import deepcopy
from itertools import chain, islice
import datasets
import numpy as np
from datasets import IterableDataset
from datasets.iterable_dataset import _BaseExamplesIterable
from more_itertools import roundrobin
class MixMultiSourcesExampleIterable(_BaseExamplesIterable):
    def __init__(self, ex_iterables: list[_BaseExamplesIterable]):
        super().__init__()
        self.ex_iterables = ex_iterables

    def _init_state_dict(self) -> dict:
        self._state_dict = {
            "ex_iterables": [ex_iterable._init_state_dict() for ex_iterable in self.ex_iterables],
            "type": self.__class__.__name__,
        }
        return self._state_dict

    @property
    def num_shards(self) -> int:
        return sum(ex_iterable.num_shards for ex_iterable in self.ex_iterables)

    def __iter__(self):
        yield from roundrobin(*self.ex_iterables)

    def shuffle_data_sources(self, generator: np.random.Generator) -> "MixMultiSourcesExampleIterable":
        """Shuffle the list of examples iterable, as well as each underlying examples iterable."""
        rng = deepcopy(generator)
        ex_iterables = list(self.ex_iterables)
        rng.shuffle(ex_iterables)
        ex_iterables = [ex_iterable.shuffle_data_sources(generator) for ex_iterable in ex_iterables]
        return MixMultiSourcesExampleIterable(ex_iterables)

    def shard_data_sources(self, num_shards: int, index: int, contiguous=True) -> "MixMultiSourcesExampleIterable":
        """Shard the underlying iterables in a roundrobin manner.

        Let's consider we have our iterables as [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]],
        and we request 3 shards.
        index 0 gets s0_0 s2_0
        index 1 gets s0_1 s2_1
        index 2 gets s1_0 s2_3
        """
        return MixMultiSourcesExampleIterable(
            list(
                islice(
                    # flatten all underlying iterables
                    chain.from_iterable([ex_iterable.shard_data_sources(1, 0) for ex_iterable in self.ex_iterables]),
                    # offset the starting point by the index
                    index,
                    # take over the full list, so exhaust the iterators
                    None,
                    # step by the number of shards requested
                    num_shards,
                )
            )
        )


def mix_dataset(iterable_datasets: list[datasets.IterableDataset]) -> IterableDataset:
    ex_iterable = MixMultiSourcesExampleIterable([ds._ex_iterable for ds in iterable_datasets])
    return IterableDataset(
        ex_iterable, distributed=iterable_datasets[0]._distributed, formatting=iterable_datasets[0]._formatting
    )
```
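For context, a minimal usage sketch of `mix_dataset` (the toy `gen` generator and the dataset sizes are placeholders, mirroring the experiments discussed in the comments; whether the sharding behaves as intended is exactly what those experiments investigate):
```python
import torch
import datasets as hf_datasets


def gen(dataset: int, n_samples: int):
    # toy generator tagging each sample with its source dataset id
    for i in range(n_samples):
        yield {"dataset": dataset, "sample": i}


ds_1 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={"dataset": 0, "n_samples": 2}).to_iterable_dataset(num_shards=2)
ds_2 = hf_datasets.Dataset.from_generator(gen, gen_kwargs={"dataset": 1, "n_samples": 3}).to_iterable_dataset(num_shards=3)

ds_mix = mix_dataset([ds_1, ds_2])
for batch in torch.utils.data.DataLoader(ds_mix, num_workers=2, batch_size=1):
    print(batch)
```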
### Questions
- Am I missing something? Is there a way to use `interleave_datasets` or `concatenate_datasets` to fit my purpose?
- Would it be the right approach to spread the maximum number of underlying shards across my different processes?
### Your contribution
As much as I can.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https://api.github.com/users/LTMeyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LTMeyer",
"id": 13559010,
"login": "LTMeyer",
"node_id": "MDQ6VXNlcjEzNTU5MDEw",
"organizations_url": "https://api.github.com/users/LTMeyer/orgs",
"received_events_url": "https://api.github.com/users/LTMeyer/received_events",
"repos_url": "https://api.github.com/users/LTMeyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LTMeyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LTMeyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LTMeyer",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7792/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7792/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 19 days, 8:00:04
|
https://api.github.com/repos/huggingface/datasets/issues/7788
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7788/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7788/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7788/events
|
https://github.com/huggingface/datasets/issues/7788
| 3,450,913,796
|
I_kwDODunzps7NsMQE
| 7,788
|
`Dataset.to_sql` doesn't utilize `num_proc`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30357072?v=4",
"events_url": "https://api.github.com/users/tcsmaster/events{/privacy}",
"followers_url": "https://api.github.com/users/tcsmaster/followers",
"following_url": "https://api.github.com/users/tcsmaster/following{/other_user}",
"gists_url": "https://api.github.com/users/tcsmaster/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tcsmaster",
"id": 30357072,
"login": "tcsmaster",
"node_id": "MDQ6VXNlcjMwMzU3MDcy",
"organizations_url": "https://api.github.com/users/tcsmaster/orgs",
"received_events_url": "https://api.github.com/users/tcsmaster/received_events",
"repos_url": "https://api.github.com/users/tcsmaster/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tcsmaster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tcsmaster/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tcsmaster",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-09-24T20:34:47
| 2025-09-24T20:35:01
| null |
NONE
| null | null | null | null |
The underlying `SqlDatasetWriter` has `num_proc` as an available argument [here](https://github.com/huggingface/datasets/blob/5dc1a179783dff868b0547c8486268cfaea1ea1f/src/datasets/io/sql.py#L63), but `Dataset.to_sql()` does not accept it, so the SQL conversion always runs in a single process.
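Until `Dataset.to_sql()` forwards it, a possible workaround is to call the writer directly (a sketch; the `SqlDatasetWriter` signature is assumed from the linked source and may differ between versions, and `num_proc > 1` presumably requires a URI string for `con` since connection objects can't be pickled):
```python
from datasets import load_dataset
from datasets.io.sql import SqlDatasetWriter

ds = load_dataset("imdb", split="train")

# `num_proc` reaches the writer here, unlike with `ds.to_sql(...)`
writer = SqlDatasetWriter(ds, "imdb_train", "sqlite:///imdb.db", num_proc=4)
writer.write()
```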
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7788/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7788/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7780
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7780/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7780/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7780/events
|
https://github.com/huggingface/datasets/issues/7780
| 3,429,267,259
|
I_kwDODunzps7MZnc7
| 7,780
|
BIGPATENT dataset inaccessible (deprecated script loader)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/137755081?v=4",
"events_url": "https://api.github.com/users/ishmaifan/events{/privacy}",
"followers_url": "https://api.github.com/users/ishmaifan/followers",
"following_url": "https://api.github.com/users/ishmaifan/following{/other_user}",
"gists_url": "https://api.github.com/users/ishmaifan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ishmaifan",
"id": 137755081,
"login": "ishmaifan",
"node_id": "U_kgDOCDX5yQ",
"organizations_url": "https://api.github.com/users/ishmaifan/orgs",
"received_events_url": "https://api.github.com/users/ishmaifan/received_events",
"repos_url": "https://api.github.com/users/ishmaifan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ishmaifan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishmaifan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ishmaifan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! I opened https://huggingface.co/datasets/NortheasternUniversity/big_patent/discussions/7 to update the dataset, hopefully it's merged soon !",
"The dataset now works with `datasets` v4 ! closing this issue"
] | 2025-09-18T08:25:34
| 2025-09-25T14:36:13
| 2025-09-25T14:36:13
|
NONE
| null | null | null | null |
dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the datasets library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face Hub so that it can be accessed with datasets>=4.x?
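In the meantime, a possible stopgap is to pin an older library version that still supports script-based loaders (a sketch; the config name and the `trust_remote_code` flag depend on the pinned version):
```python
# pip install "datasets<4.0"
from datasets import load_dataset

ds = load_dataset("NortheasternUniversity/big_patent", "all", trust_remote_code=True)
```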
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7780/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7780/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7 days, 6:10:39
|
https://api.github.com/repos/huggingface/datasets/issues/7777
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7777/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7777/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7777/events
|
https://github.com/huggingface/datasets/issues/7777
| 3,424,462,082
|
I_kwDODunzps7MHSUC
| 7,777
|
push_to_hub not overwriting but stuck in a loop when there are existing commits
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"HTTP 412 means a commit happened in the meantime, so `get_deletions_and_dataset_card` has to retry to get the latest version of the dataset card and what files to delete based on the latest version of the dataset repository\n\nAre you running other operations in the dataset repo for your push_to_hub ?",
"There was only a map() followed by a push_to_hub(). The repo had one prior commit also by using push_to_hub(). The error disappeared when I downgraded datasets to 4.0.0.",
"It is reproducible if you use finegrained token with Read+Write (Open pull request) access to only that repo.",
"Ah it was due to the use of requests_cache with POST methods, closing this. "
] | 2025-09-17T03:15:35
| 2025-09-17T19:31:14
| 2025-09-17T19:31:14
|
NONE
| null | null | null | null |
### Describe the bug
`get_deletions_and_dataset_card` gets stuck retrying on an 'a commit has happened' error (HTTP 412) during `push_to_hub` with datasets 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Write code that calls `push_to_hub`, and run it twice, each time with different content for the `datasets.Dataset` (see the sketch below).
The code gets stuck in the `time.sleep` retry loop of `get_deletions_and_dataset_card`. If the error is printed explicitly, it is HTTP 412.
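A minimal reproduction sketch (the repo id is a placeholder):
```python
from datasets import Dataset

# first run
Dataset.from_dict({"text": ["a", "b"]}).push_to_hub("username/demo-repo")

# second run with different content: this call hangs, retrying on HTTP 412
Dataset.from_dict({"text": ["c", "d", "e"]}).push_to_hub("username/demo-repo")
```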
### Expected behavior
The new dataset overwrites the existing one in the repo.
### Environment info
datasets 4.1.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7777/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7777/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 16:15:39
|
https://api.github.com/repos/huggingface/datasets/issues/7772
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7772/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7772/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7772/events
|
https://github.com/huggingface/datasets/issues/7772
| 3,417,353,751
|
I_kwDODunzps7LsK4X
| 7,772
|
Error processing scalar columns using tensorflow.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3871483?v=4",
"events_url": "https://api.github.com/users/khteh/events{/privacy}",
"followers_url": "https://api.github.com/users/khteh/followers",
"following_url": "https://api.github.com/users/khteh/following{/other_user}",
"gists_url": "https://api.github.com/users/khteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/khteh",
"id": 3871483,
"login": "khteh",
"node_id": "MDQ6VXNlcjM4NzE0ODM=",
"organizations_url": "https://api.github.com/users/khteh/orgs",
"received_events_url": "https://api.github.com/users/khteh/received_events",
"repos_url": "https://api.github.com/users/khteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/khteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/khteh",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Using tf.convert_to_tensor works fine:\n\n```\nimport tensorflow as tf\n\nstart_pos = tf.convert_to_tensor(train_ds['start_positions'], dtype=tf.int64)\nstart_pos = tf.reshape(start_pos, [-1, 1])\n```\n\n\nAlternatively, using the built-in to_tf_dataset also avoids the issue:\n\n```\ntrain_tf = train_ds.to_tf_dataset(\n columns=['input_ids','attention_mask'],\n label_cols=['start_positions','end_positions'],\n shuffle=True,\n batch_size=32\n)\n```",
"```\n start_pos = tf.convert_to_tensor(self._train_ds['start_positions'], dtype=tf.int64)\n File \"/home/khteh/.local/share/virtualenvs/pAIthon-GaqEDHQT/lib/python3.13/site-packages/tensorflow/python/util/traceback_utils.py\", line 153, in error_handler\n raise e.with_traceback(filtered_tb) from None\n File \"/home/khteh/.local/share/virtualenvs/pAIthon-GaqEDHQT/lib/python3.13/site-packages/tensorflow/python/framework/constant_op.py\", line 108, in convert_to_eager_tensor\n return ops.EagerTensor(value, ctx.device_name, dtype)\n ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nValueError: TypeError: Scalar tensor has no `len()`\nTraceback (most recent call last):\n\n File \"/home/khteh/.local/share/virtualenvs/pAIthon-GaqEDHQT/lib/python3.13/site-packages/tensorflow/python/framework/ops.py\", line 361, in __len__\n raise TypeError(\"Scalar tensor has no `len()`\")\n\nTypeError: Scalar tensor has no `len()`\n```\n\n`to_tf_dataset` works perfectly."
] | 2025-09-15T10:36:31
| 2025-09-27T08:22:44
| null |
NONE
| null | null | null | null |
`datasets==4.0.0`
```
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'end_idx', 'input_ids', 'attention_mask', 'start_positions', 'end_positions']
features:{'question': Value('string'), 'sentences': Value('string'), 'answer': Value('string'), 'str_idx': Value('int64'), 'end_idx': Value('int64'), 'input_ids': List(Value('int32')), 'attention_mask': List(Value('int8')), 'start_positions': Value('int64'), 'end_positions': Value('int64')}
```
`train_ds_tensor = train_ds['start_positions'].to_tensor(shape=(-1,1))` hits the following error:
```
AttributeError: 'Column' object has no attribute 'to_tensor'
```
`tf.reshape(train_ds['start_positions'], shape=[-1,1])` hits the following error:
```
TypeError: Scalar tensor has no `len()`
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7772/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7772/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7767
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7767/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7767/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7767/events
|
https://github.com/huggingface/datasets/issues/7767
| 3,411,654,444
|
I_kwDODunzps7LWbcs
| 7,767
|
Custom `dl_manager` in `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[] | 2025-09-12T19:06:23
| 2025-09-12T19:07:52
| null |
NONE
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```
def load_dataset(
    ...
    dl_manager: Optional[DownloadManager] = None,  # add this new argument
    **config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
    ...
    # Create a dataset builder
    builder_instance = load_dataset_builder(
        path=path,
        name=name,
        data_dir=data_dir,
        data_files=data_files,
        cache_dir=cache_dir,
        features=features,
        download_config=download_config,
        download_mode=download_mode,
        revision=revision,
        token=token,
        storage_options=storage_options,
        **config_kwargs,
    )
    # Return iterable dataset in case of streaming
    if streaming:
        return builder_instance.as_streaming_dataset(split=split)
    # Note: This is the revised part
    if dl_manager is None:
        if download_config is None:
            download_config = DownloadConfig(
                cache_dir=builder_instance._cache_downloaded_dir,
                force_download=download_mode == DownloadMode.FORCE_REDOWNLOAD,
                force_extract=download_mode == DownloadMode.FORCE_REDOWNLOAD,
                use_etag=False,
                num_proc=num_proc,
                token=builder_instance.token,
                storage_options=builder_instance.storage_options,
            )  # We don't use etag for data files to speed up the process
        dl_manager = DownloadManager(
            dataset_name=builder_instance.dataset_name,
            download_config=download_config,
            data_dir=builder_instance.config.data_dir,
            record_checksums=(
                builder_instance._record_infos or verification_mode == VerificationMode.ALL_CHECKS
            ),
        )
    # Download and prepare data
    builder_instance.download_and_prepare(
        download_config=download_config,
        download_mode=download_mode,
        verification_mode=verification_mode,
        dl_manager=dl_manager,  # pass the new argument
        num_proc=num_proc,
        storage_options=storage_options,
    )
    ...
```
### Motivation
In my case, I'm hoping to handle the downloading of cache files manually (e.g., not using hash filenames and saving them to another location, or reusing potentially existing local files). A usage sketch of the proposed argument is shown below.
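For illustration, here is how the proposed argument could be used (hypothetical, since `dl_manager` is the new argument requested above; `DownloadConfig` and `DownloadManager` are the existing classes already used in the snippet):
```python
from datasets import DownloadConfig, DownloadManager, load_dataset

download_config = DownloadConfig(cache_dir="/data/my_custom_cache", use_etag=False)
dl_manager = DownloadManager(dataset_name="my_dataset", download_config=download_config)

# `dl_manager` is the proposed new argument
ds = load_dataset("my_org/my_dataset", dl_manager=dl_manager)
```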
### Your contribution
It's already implemented above. If maintainers think this should be considered, I'll open a PR.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7767/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7767/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7766
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7766/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7766/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7766/events
|
https://github.com/huggingface/datasets/issues/7766
| 3,411,611,165
|
I_kwDODunzps7LWQ4d
| 7,766
|
cast columns to Image/Audio/Video with `storage_options`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"A",
"1",
"1",
"Ok",
"> ### Feature request\n> Allow `storage_options` to be passed in\n> \n> 1. `cast` related operations (e.g., `cast_columns, cast`)\n> 2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`\n> \n> import datasets\n> \n> image_path = \"s3://bucket/sample.png\"\n> dataset = datasets.Dataset.from_dict({\"image_path\": [image_path]})\n> \n> # dataset = dataset.cast_column(\"image_path\", datasets.Image()) # now works without `storage_options`\n> \n> # expected behavior\n> dataset = dataset.cast_column(\"image_path\", datasets.Image(), storage_options={\"anon\": True})\n> ### Motivation\n> I'm using my own registered fsspec filesystem (s3 with customized local cache support). I need to pass cache folder paths `cache_dirs: list[str]` to the filesystem when I read the remote images (cast from file_paths).\n> \n> ### Your contribution\n> Could help with a PR at weekends\n\n\n\n>"
] | 2025-09-12T18:51:01
| 2025-09-27T08:14:47
| null |
NONE
| null | null | null | null |
### Feature request
Allow `storage_options` to be passed in
1. `cast` related operations (e.g., `cast_columns, cast`)
2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
# dataset = dataset.cast_column("image_path", datasets.Image()) # now works without `storage_options`
# expected behavior
dataset = dataset.cast_column("image_path", datasets.Image(), storage_options={"anon": True})
```
### Motivation
I'm using my own registered fsspec filesystem (s3 with customized local cache support). I need to pass cache folder paths `cache_dirs: list[str]` to the filesystem when I read the remote images (cast from file_paths).
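Until then, a workaround sketch I'm considering (assumes an S3-compatible fsspec filesystem such as `s3fs`; the storage options are applied at the fsspec level and the images are passed as bytes instead of paths):
```python
import fsspec
import datasets

fs = fsspec.filesystem("s3", anon=True)  # put the storage options here instead

paths = ["s3://bucket/sample.png"]
dataset = datasets.Dataset.from_dict(
    {"image": [{"bytes": fs.cat(p), "path": p} for p in paths]}
)
dataset = dataset.cast_column("image", datasets.Image())
```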
### Your contribution
Could help with a PR at weekends
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7766/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7766/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7765
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7765/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7765/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7765/events
|
https://github.com/huggingface/datasets/issues/7765
| 3,411,556,378
|
I_kwDODunzps7LWDga
| 7,765
|
polars dataset cannot cast column to Image/Audio/Video
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I fixed this with a combination of `to_dict` and `from_dict`:\n\n```py\ndatasets.Dataset.from_dict(df.to_dict(as_series=False))\n```",
"@samuelstevens Yeah, I'm using similar workaround as well. But it would be ideal if we can avoid the copy."
] | 2025-09-12T18:32:49
| 2025-10-13T14:39:48
| 2025-10-13T14:39:48
|
NONE
| null | null | null | null |
### Describe the bug
A dataset created with `from_polars` cannot cast a column to Image/Audio/Video, while the same cast works with `from_pandas` and `from_dict`.
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # raises Error
# pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
# pandas
df = pd.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_pandas(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
# {'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
# dict
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
# {'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
```
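A possible workaround until this is fixed (untested sketch): normalize the polars-backed `large_string` column to a plain string before casting, or round-trip through `to_dict`/`from_dict` as mentioned in the comments.
```python3
# workaround sketch: cast large_string -> string first, then to Image
df = pl.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Value("string"))
dataset = dataset.cast_column("image_path", datasets.Image())
```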
### Expected behavior
The `from_polars` case should not raise an error and should produce the same output as `from_pandas` and `from_dict`.
### Environment info
```
# Name Version Build Channel
datasets 4.0.0 pypi_0 pypi
pandas 2.3.1 pypi_0 pypi
polars 1.32.3 pypi_0 pypi
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7765/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7765/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 30 days, 20:06:59
|
https://api.github.com/repos/huggingface/datasets/issues/7760
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7760/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7760/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7760/events
|
https://github.com/huggingface/datasets/issues/7760
| 3,401,799,485
|
I_kwDODunzps7Kw1c9
| 7,760
|
Hugging Face Hub Dataset Upload CAS Error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142820182?v=4",
"events_url": "https://api.github.com/users/n-bkoe/events{/privacy}",
"followers_url": "https://api.github.com/users/n-bkoe/followers",
"following_url": "https://api.github.com/users/n-bkoe/following{/other_user}",
"gists_url": "https://api.github.com/users/n-bkoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/n-bkoe",
"id": 142820182,
"login": "n-bkoe",
"node_id": "U_kgDOCINDVg",
"organizations_url": "https://api.github.com/users/n-bkoe/orgs",
"received_events_url": "https://api.github.com/users/n-bkoe/received_events",
"repos_url": "https://api.github.com/users/n-bkoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/n-bkoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n-bkoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/n-bkoe",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"cc @jsulz maybe ?",
"Curious! I took a look at this and was unable to see why this would be occurring on our side. Tagging in @jgodlew and @bpronan since they might have insights. \n\n@n-bkoe just a few questions if you wouldn't mind: \n1. What kind of data are you uploading and what is the difference in file size (in bytes) between 100 and 10,000 samples?\n2. Could you provide a specific repository where you encountered this so we could look at to attempt to trace this in our systems?\n3. I cannot currently reproduce this, but I'm just trying locally; have you tried to attempt this outside of SageMaker? I'm wondering if there is something unique about that environment causing this. \n4. How/where did you set `HF_HUB_DISABLE_XET`?",
"Hi, and thank you for your quick answer 🙏 \n\n1. Its fairly simple string data, four cols, all string, some long. The script works for data up to 8000 samples long, which is two parquet files totalling 260 kb. It breaks at 10k. \n2. Unfortunately, both data and code is private for now !\n3. I will try \n4. I did it both at CLI level when call my script, and tried inside the python script with os.environ[\"HF_HUB_DISABLE_XET\"] = \"1\"\n\nThe load is also partial, it starts for one file, but does not complete and no data file is pushed. \n\n```\n5. Pushing to Hugging Face Hub...\nPushing dataset to YourOrg/dataset-10000-test_set...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 1235.07ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:37.018887Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFGSQV1FH8846S0DNS91C): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 291kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 291kB, 0.00B/s \n❌ Failed to push test_set: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? shards/s]\nPushing dataset to YourOrg/dataset-10000-indic_test_set...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 1289.10ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:37.721996Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFHFPJ2DC5D6JC93172H9): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 277kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 277kB, 0.00B/s \n❌ Failed to push indic_test_set: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? 
shards/s]\nPushing dataset to YourOrg/dataset-10000-indic_test_set_combined...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 1310.04ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:38.685575Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFJDTVAYM9MFTRDSWKTD6): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 184kB, 0.00B/s \n❌ Failed to push indic_test_set_combined: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? shards/s]\n\nSummary:\n Succeeded: None\n Failed: [('test_set', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'), ('indic_test_set', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'), ('indic_test_set_combined', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')]\n❌ Some datasets failed to upload\n```\n\n",
"Thanks for following up with more details, @n-bkoe \n\nCould you tell me more about your Sagemaker environment and how you are running this script? In testing with your steps to reproduce in a Sagemaker Jupyter notebook instance (and uploading Parquet datasets with splits of anywhere from a few KBs to a few hundred MBs), I've yet to reproduce this error. This makes me believe that it's either something about the Sagemaker environment or the reproduction steps that I'm not yet emulating. \n\nConcerning the `HF_HUB_DISABLE_XET` flag, you should ensure it is set before any package imports and in the same process where you are running the script itself. If either aren't true, then this environment variable will not work. You could also explicitly uninstall `hf-xet` from the environment, although that should be unnecessary with the `HF_HUB_DISABLE_XET` flag."
] | 2025-09-10T10:01:19
| 2025-09-16T20:01:36
| null |
NONE
| null | null | null | null |
### Describe the bug
I am experiencing persistent 401 Unauthorized errors when attempting to upload datasets to the Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. I tried using HF_HUB_DISABLE_XET=1; uploads only seem to succeed for smaller files.
Exact error message:
```
Processing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-10T09:44:35.657565Z ERROR Fatal Error: "cas::upload_xorb" api call failed (request id 01b[...]XXX): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX)
at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113
Processing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s
New Data Upload : 0%| | 0.00B / 184kB, 0.00B/s
❌ Failed to push some_dataset: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX
```
Workaround Attempts
1. **Disabled XET**: Set the `HF_HUB_DISABLE_XET=1` environment variable (see the sketch after this list)
2. **Updated hf-xet**: Use `hf-xet==1.1.9` rather than latest
3. **Verified Authentication**: Confirmed HF token is valid and has write permissions
4. **Tested with Smaller Datasets**:
- 100 samples: ✅ **SUCCESS** (uploaded successfully)
- 10,000 samples: ❌ **FAILS** (401 Unauthorized)
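A sketch of setting the flag early enough, per the maintainer's note in the thread that it must be set before any Hugging Face import and in the same process (placeholder data and repo id):
```python
import os

# Must be set before importing datasets / huggingface_hub, in the same process,
# otherwise the flag is read too late to take effect
os.environ["HF_HUB_DISABLE_XET"] = "1"

from datasets import Dataset

ds = Dataset.from_dict({"question": ["q1"], "answer": ["a1"]})  # toy placeholder rows
ds.push_to_hub("Org/some-dataset")  # placeholder repo id from the report
```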
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
# Create dataset (example with 10,000 samples)
# `questions` and `answers` are lists of strings prepared earlier in the pipeline (not shown here)
dataset = Dataset.from_dict({
"question": questions,
"answer": answers,
# ... other fields
})
# Split into train/test
dataset_dict = dataset.train_test_split(test_size=0.1)
# Upload to Hub
dataset_dict.push_to_hub("Org/some-dataset")
```
### Expected behavior
## Expected Behavior
- Dataset should upload successfully to Hugging Face Hub
- Progress bars should complete without authentication errors
- Dataset should be accessible at the specified repository URL
## Actual Behavior
- Upload fails consistently with 401 Unauthorized error
- Error occurs specifically during CAS service interaction
- No progress is made on the upload (0% completion)
- Dataset is created on Hugging Face Hub with no data folder
### Environment info
- **Platform**: SageMaker (AWS)
- **Python Version**: 3.12
- **Libraries**:
- `datasets` library (latest version)
- `hf-xet==1.1.9` (attempted fix)
- **Authentication**: Hugging Face token configured
- **Dataset Size**: ~10,000 samples, works for smaller sizes (e.g. 100)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7760/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7760/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7759
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7759/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7759/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7759/events
|
https://github.com/huggingface/datasets/issues/7759
| 3,398,099,513
|
I_kwDODunzps7KiuI5
| 7,759
|
Comment/feature request: Huggingface 502s from GHA
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Scott-Simmons",
"id": 52365471,
"login": "Scott-Simmons",
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Scott-Simmons",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-09-09T11:59:20
| 2025-09-09T13:02:28
| null |
NONE
| null | null | null | null |
This is no longer a pressing issue, but for completeness I am reporting that on August 26th, GET requests to `https://datasets-server.huggingface.co/info\?dataset\=livebench/math` were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/48921123754) (that link will expire eventually, [here are the logs](https://github.com/user-attachments/files/22233578/logs_44225296943.zip)).
When invoked from Actions, the requests appeared to fail consistently for ~6 hours. However, the same requests never returned 502s when invoked from my local machine in that period.
I suspect this is related to how the requests are routed from GitHub Actions versus locally.
It's not clear to me whether the requests even reached Hugging Face servers or whether the GitHub proxy stopped them from going through, but I wanted to report it nonetheless in case this is helpful information. I'm curious whether Hugging Face can do anything on their end to confirm the cause.
And a feature request in case this happens again (assuming Hugging Face has visibility on it): a "datasets status" page highlighting whether 502s are occurring for specific individual datasets could be useful for people debugging on the other end of this!
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7759/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7759/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7758
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7758/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7758/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7758/events
|
https://github.com/huggingface/datasets/issues/7758
| 3,395,590,783
|
I_kwDODunzps7KZJp_
| 7,758
|
Option for Anonymous Dataset link
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/egrace479",
"id": 38985481,
"login": "egrace479",
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"repos_url": "https://api.github.com/users/egrace479/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"type": "User",
"url": "https://api.github.com/users/egrace479",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[] | 2025-09-08T20:20:10
| 2025-09-08T20:20:10
| null |
NONE
| null | null | null | null |
### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!). However, we have an increasing challenge when it comes to sharing our datasets for paper (both conference and journal) submissions. Due to the need to share data anonymously, we can't use the Hugging Face URLs, but the datasets tend to be too large to include as a zip. Being able to have an anonymous link would be great, since we can't double-publish the data.
### Your contribution
Sorry, I don't have a contribution to make to the implementation of this. Perhaps it would be possible to work off the [Anonymous GitHub](https://github.com/tdurieux/anonymous_github) code to generate something analogous with pointers to the data still on Hugging Face's servers (instead of the duplication of data required for the GitHub version)?
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7758/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7758/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7757
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7757/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7757/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7757/events
|
https://github.com/huggingface/datasets/issues/7757
| 3,389,535,011
|
I_kwDODunzps7KCDMj
| 7,757
|
Add support for `.conll` file format in datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/88763593?v=4",
"events_url": "https://api.github.com/users/namesarnav/events{/privacy}",
"followers_url": "https://api.github.com/users/namesarnav/followers",
"following_url": "https://api.github.com/users/namesarnav/following{/other_user}",
"gists_url": "https://api.github.com/users/namesarnav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/namesarnav",
"id": 88763593,
"login": "namesarnav",
"node_id": "MDQ6VXNlcjg4NzYzNTkz",
"organizations_url": "https://api.github.com/users/namesarnav/orgs",
"received_events_url": "https://api.github.com/users/namesarnav/received_events",
"repos_url": "https://api.github.com/users/namesarnav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/namesarnav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namesarnav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/namesarnav",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"That would be cool ! feel free to ping me if I can help reviewing a PR"
] | 2025-09-06T07:25:39
| 2025-09-10T14:22:48
| null |
NONE
| null | null | null | null |
### Feature request
I’d like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now, `.conll` datasets need to be manually parsed or preprocessed before being loaded into `datasets`. Having built-in support would save time and make workflows smoother for researchers and practitioners.
I propose adding a `conll` dataset builder or file parser to `datasets` that can:
- Read `.conll` files with customizable delimiters (space, tab).
- Handle sentence/document boundaries (typically indicated by empty lines).
- Support common CoNLL variants (e.g., CoNLL-2000 chunking, CoNLL-2003 NER).
- Output a dataset where each example contains:
- tokens: list of strings
- tags (or similar): list of labels aligned with tokens
Given a .conll snippet like:
```
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
```
The dataset should load as:
```
{
"tokens": ["EU", "rejects", "German", "call", "."],
"tags": ["B-ORG", "O", "B-MISC", "O", "O"]
}
```
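For illustration, a minimal parsing sketch of the kind of logic such a builder could use (assumptions, not part of the proposal: whitespace-delimited columns, blank lines as sentence boundaries, the tag in the last column, `#`-prefixed comment lines skipped):
```python
from typing import Iterator

def parse_conll(path: str, token_col: int = 0, tag_col: int = -1) -> Iterator[dict]:
    """Yield one {"tokens": [...], "tags": [...]} dict per sentence from a CoNLL-style file."""
    tokens, tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line marks a sentence boundary
                if tokens:
                    yield {"tokens": tokens, "tags": tags}
                    tokens, tags = [], []
                continue
            if line.startswith("#"):  # skip comment / document markers
                continue
            cols = line.split()
            tokens.append(cols[token_col])
            tags.append(cols[tag_col])
    if tokens:  # flush the last sentence if the file has no trailing blank line
        yield {"tokens": tokens, "tags": tags}
```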
### Motivation
- CoNLL files are a standard benchmark format in NLP (e.g., CoNLL-2003, CoNLL-2000).
- Many users train NER or sequence labeling models (like BERT for token classification) directly on `.conll` files.
- Right now you have to write your own parsing scripts. Built-in support would unify this process and be much more convenient.
### Your contribution
I’d be happy to contribute by implementing this feature. My plan is to-
- Add a new dataset script (conll.py) to handle .conll files.
- Implement parsing logic that supports sentence/document boundaries and token-label alignment.
- Write unit tests with small `.conll` examples to ensure correctness.
- Add documentation and usage examples so new users can easily load `.conll` datasets.
This would be my first open source contribution, so I’ll follow the `CONTRIBUTING.md` guidelines closely and adjust based on feedback from the maintainers.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7757/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7757/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7756
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7756/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7756/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7756/events
|
https://github.com/huggingface/datasets/issues/7756
| 3,387,076,693
|
I_kwDODunzps7J4rBV
| 7,756
|
datasets.map(f, num_proc=N) hangs with N>1 when run on import
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20065?v=4",
"events_url": "https://api.github.com/users/arjunguha/events{/privacy}",
"followers_url": "https://api.github.com/users/arjunguha/followers",
"following_url": "https://api.github.com/users/arjunguha/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunguha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arjunguha",
"id": 20065,
"login": "arjunguha",
"node_id": "MDQ6VXNlcjIwMDY1",
"organizations_url": "https://api.github.com/users/arjunguha/orgs",
"received_events_url": "https://api.github.com/users/arjunguha/received_events",
"repos_url": "https://api.github.com/users/arjunguha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arjunguha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunguha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arjunguha",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-09-05T10:32:01
| 2025-09-05T10:32:01
| null |
NONE
| null | null | null | null |
### Describe the bug
If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top-level, Python hangs.
### Steps to reproduce the bug
1. Create a file that runs datasets.map at the top-level:
```bash
cat <<EOF > import_me.py
import datasets
the_dataset = datasets.load_dataset("openai/openai_humaneval")
the_dataset = the_dataset.map(lambda item: item, num_proc=2)
EOF
```
2. Start Python REPL:
```bash
uv run --python 3.12.3 --with "datasets==4.0.0" python3
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
3. Import the file:
```python
import import_me
```
Observe the hang.
### Expected behavior
Ideally this would not hang, or would fall back to `num_proc=1` with a warning.
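A common workaround (a sketch, not part of the report): avoid running the multiprocessing `map` at import time, e.g. by moving it into a function or behind an `if __name__ == "__main__":` guard:
```python
# import_me.py -- sketch of a layout that avoids running map() at import time
import datasets

def build_dataset(num_proc: int = 2):
    ds = datasets.load_dataset("openai/openai_humaneval")
    return ds.map(lambda item: item, num_proc=num_proc)

if __name__ == "__main__":
    the_dataset = build_dataset()
```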
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7756/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7756/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7753
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7753/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7753/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7753/events
|
https://github.com/huggingface/datasets/issues/7753
| 3,381,831,487
|
I_kwDODunzps7Jkqc_
| 7,753
|
datasets massively slows data reads, even in memory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1191040?v=4",
"events_url": "https://api.github.com/users/lrast/events{/privacy}",
"followers_url": "https://api.github.com/users/lrast/followers",
"following_url": "https://api.github.com/users/lrast/following{/other_user}",
"gists_url": "https://api.github.com/users/lrast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lrast",
"id": 1191040,
"login": "lrast",
"node_id": "MDQ6VXNlcjExOTEwNDA=",
"organizations_url": "https://api.github.com/users/lrast/orgs",
"received_events_url": "https://api.github.com/users/lrast/received_events",
"repos_url": "https://api.github.com/users/lrast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lrast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lrast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lrast",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi ! you should try\n\n```python\nfrom datasets import Array3D, Dataset, Features, Value\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\n```\n\notherwise the type of the \"image\" column is List(List(List(Value(\"uint8\")))) and is less efficient.",
"Thanks! This leads to a 10x speedup:\n```python\nimport torch\nimport time\nfrom datasets import Array3D, Dataset, Features, Value\n\nimages = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)\nlabels = torch.randint(0, 200, (1000,), dtype=torch.uint8)\n\npt_dataset = torch.utils.data.TensorDataset(images, labels)\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\nhf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)\n\nhf_dataset.set_format('torch', dtype=torch.uint8)\nhf_in_memory.set_format('torch', dtype=torch.uint8)\n\n# measure access speeds\ndef time_access(dataset, img_col):\n start_time = time.time()\n for i in range(1000):\n _ = dataset[i][img_col].shape\n end_time = time.time()\n return end_time - start_time\n\n\nprint(f\"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds\")\nprint(f\"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds\")\nprint(f\"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds\")\n```\nProduces\n```\nIn-memory Tensor access: 0.0026 seconds\nHF Dataset access: 0.2070 seconds\nIn-memory HF Dataset access: 0.2112 seconds\n```\n\nCurious if there is a reason why this is not the default behavior for huggingface image processors?\n```python\nfrom transformers import ViTImageProcessor\nfrom transformers import AutoImageProcessor\n\nfrom datasets import load_dataset\n# Load the dataset\nds = load_dataset('ylecun/mnist', split='train[0:100]')\n\n# Instantiate the processor, explicitly requesting NumPy arrays\nprocessor1 = ViTImageProcessor.from_pretrained('facebook/vit-mae-base', do_convert_rgb=True)\nprocessor2 = AutoImageProcessor.from_pretrained(\"facebook/detr-resnet-50\", use_fast=True)\n\nprocessed1 = ds.map(lambda row: processor1(row['image']))\nprocessed2 = ds.map(lambda row: processor2(row['image']))\n\nprint( type(processed1['pixel_values'][0]), type(processed1['pixel_values'][0]))\n```\nproduces\n```\n<class 'list'> <class 'list'>\n```\n\nI can, of course, manually manipulate the dataset to the use the correct format, but this is fairly standard for images, and the performance implications seem large."
] | 2025-09-04T01:45:24
| 2025-09-18T22:08:51
| null |
NONE
| null | null | null | null |
### Describe the bug
Loading image data from a Hugging Face dataset results in very slow read speeds, approximately 1000 times slower than reading the same data from a PyTorch dataset. This applies even when the dataset is loaded into RAM using a `keep_in_memory=True` flag.
The following script reproduces the result with random data, but it applies equally to datasets that are loaded from the hub.
### Steps to reproduce the bug
The following script should reproduce the behavior
```
import torch
import time
from datasets import Dataset
images = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, 200, (1000,), dtype=torch.uint8)
pt_dataset = torch.utils.data.TensorDataset(images, labels)
hf_dataset = Dataset.from_dict({'image': images, 'label':labels})
hf_dataset.set_format('torch', dtype=torch.uint8)
hf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)
# measure access speeds
def time_access(dataset, img_col):
start_time = time.time()
for i in range(1000):
_ = dataset[i][img_col].shape
end_time = time.time()
return end_time - start_time
print(f"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds")
print(f"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds")
print(f"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds")
```
### Expected behavior
For me, the above script produces
```
In-memory Tensor access: 0.0025 seconds
HF Dataset access: 2.9317 seconds
In-memory HF Dataset access: 2.8082 seconds
```
I think that this difference is larger than expected.
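For context, the fix suggested later in the thread is to declare the image column with a fixed-shape `Array3D` feature so it is not stored as nested lists (sketch adapted from that reply; `images` and `labels` are the uint8 tensors from the script above):
```python
from datasets import Array3D, Dataset, Features, Value

features = Features({"image": Array3D(shape=(3, 224, 224), dtype="uint8"), "label": Value("uint8")})
hf_dataset = Dataset.from_dict({"image": images, "label": labels}, features=features)
hf_dataset.set_format("torch")
```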
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-14.7.7-arm64-arm-64bit
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7753/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7753/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7751
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7751/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7751/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7751/events
|
https://github.com/huggingface/datasets/issues/7751
| 3,358,369,976
|
I_kwDODunzps7ILKi4
| 7,751
|
Dill version update
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4",
"events_url": "https://api.github.com/users/Navanit-git/events{/privacy}",
"followers_url": "https://api.github.com/users/Navanit-git/followers",
"following_url": "https://api.github.com/users/Navanit-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Navanit-git/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Navanit-git",
"id": 98005188,
"login": "Navanit-git",
"node_id": "U_kgDOBddwxA",
"organizations_url": "https://api.github.com/users/Navanit-git/orgs",
"received_events_url": "https://api.github.com/users/Navanit-git/received_events",
"repos_url": "https://api.github.com/users/Navanit-git/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Navanit-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Navanit-git/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Navanit-git",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"#7752 ",
"related: #7510 "
] | 2025-08-27T07:38:30
| 2025-09-10T14:24:02
| null |
NONE
| null | null | null | null |
### Describe the bug
Why is `datasets` not updating its `dill` dependency?
I just want to know what the repercussions would be if I bumped the dill version pinned by `datasets`.
For now, in multiple places I have to work around the pin because other packages in my environment require dill 0.4.0, so why not update it in `datasets`?
Adding a PR too.
### Steps to reproduce the bug
.
### Expected behavior
.
### Environment info
.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7751/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7751/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7746
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7746/events
|
https://github.com/huggingface/datasets/issues/7746
| 3,345,391,211
|
I_kwDODunzps7HZp5r
| 7,746
|
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Awesome075",
"id": 187888489,
"login": "Awesome075",
"node_id": "U_kgDOCzLzaQ",
"organizations_url": "https://api.github.com/users/Awesome075/orgs",
"received_events_url": "https://api.github.com/users/Awesome075/received_events",
"repos_url": "https://api.github.com/users/Awesome075/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Awesome075",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"@sayakpaul @a-r-r-o-w could you verify this issue then i can contribute to solve this issue!😊"
] | 2025-08-22T12:52:03
| 2025-08-27T20:23:35
| null |
NONE
| null | null | null | null |
Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
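For reference, a conversion of this kind can be done with a pre-4.0 `datasets` release, where script-based loading still works (a sketch, not necessarily the exact steps used for the linked repository):
```python
# pip install "datasets<4.0.0"  # a release that still supports loading scripts
from datasets import load_dataset

ds = load_dataset("alexfabbri/multi_news", trust_remote_code=True)
ds.push_to_hub("Awesome075/multi_news_parquet")  # re-uploads the splits as Parquet, no loading script needed
```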
Could the maintainers please guide me, or themselves update the official `multi_news` dataset, to use this working Parquet version? This would involve updating the canonical pointer for `multi_news` to resolve to the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you!
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7745
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7745/events
|
https://github.com/huggingface/datasets/issues/7745
| 3,345,286,773
|
I_kwDODunzps7HZQZ1
| 7,745
|
Audio mono argument no longer supported, despite class documentation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jheitz",
"id": 5666041,
"login": "jheitz",
"node_id": "MDQ6VXNlcjU2NjYwNDE=",
"organizations_url": "https://api.github.com/users/jheitz/orgs",
"received_events_url": "https://api.github.com/users/jheitz/received_events",
"repos_url": "https://api.github.com/users/jheitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jheitz",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I want to solve this problem can you please assign it to me\nand also can you please guide whether the mono parameter is required to be re-added or the documentation needs an update?"
] | 2025-08-22T12:15:41
| 2025-08-24T18:22:41
| null |
NONE
| null | null | null | null |
### Describe the bug
The `Audio` feature no longer accepts a `mono` argument, even though the class documentation still describes one. Either the documentation should be updated, or the flag (and the corresponding logic to convert the audio to mono) re-introduced.
### Steps to reproduce the bug
`Audio(sampling_rate=16000, mono=True)` raises the error
`TypeError: Audio.__init__() got an unexpected keyword argument 'mono'`
However, in the class documentation, it says:
Args:
sampling_rate (`int`, *optional*):
Target sampling rate. If `None`, the native sampling rate is used.
mono (`bool`, defaults to `True`):
Whether to convert the audio signal to mono by averaging samples across
channels.
[...]
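In the meantime, the averaging behaviour the docstring describes can be reproduced by hand on the decoded array (a sketch, not the library's API; the channel axis depends on how the audio was decoded):
```python
import numpy as np

def to_mono(audio_array: np.ndarray, channel_axis: int = 0) -> np.ndarray:
    # Average across channels if the signal is multi-channel, as the old `mono=True` described
    return audio_array.mean(axis=channel_axis) if audio_array.ndim > 1 else audio_array
```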
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7744/events
|
https://github.com/huggingface/datasets/issues/7744
| 3,343,510,686
|
I_kwDODunzps7HSeye
| 7,744
|
dtype: ClassLabel is not parsed correctly in `features.py`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cmatKhan",
"id": 43553003,
"login": "cmatKhan",
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cmatKhan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I think it's \"class_label\"",
"> I think it's \"class_label\"\n\nI see -- thank you. This works\n\n```yaml\nlicense: mit\nlanguage:\n- en\ntags:\n- genomics\n- yeast\n- transcription\n- perturbation\n- response\n- overexpression\npretty_name: Hackett, 2020 Overexpression\nsize_categories:\n- 1M<n<10M\ndataset_info:\n features:\n ...\n - name: mechanism\n dtype:\n class_label:\n names: [\"GEV\", \"ZEV\"]\n description: induction system (GEV or ZEV)\n - name: restriction\n dtype:\n class_label:\n names: [\"M\", \"N\", \"P\"]\n description: nutrient limitation (M, N or P)\n```\n\nI see the documentation for [datasets.ClassLabel](https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.ClassLabel). And the documentation for the [dataset cards](https://huggingface.co/docs/hub/en/datasets-cards). I don't see anything in either of those places, though, that specifies the pattern above.\n\nI suppose rather than writing the yaml by hand, the expected workflow is to use `datasets` to construct these features?",
"I generally copy/paste and adapt a YAML from another dataset.\n\nBut it's also possible to generate it from `datasets` like that\n\n```python\n>>> import yaml\n>>> print(yaml.dump(features._to_yaml_list(), sort_keys=False))\n- name: start\n dtype: int32\n- name: end\n dtype: int32\n- name: restriction\n dtype:\n class_label:\n names: [\"M\", \"N\", \"P\"]\n```"
] | 2025-08-21T23:28:50
| 2025-09-10T15:23:41
| 2025-09-10T15:23:41
|
NONE
| null | null | null | null |
`dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This YAML in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I have since changed `ClassLabel` to `string`, i.e. a different dtype, to avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
"""Convert snake-case string to camel-case string."""
name = _single_underscore_re.split(name)
name = [_multiple_underscores_re.split(n) for n in name]
return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cmatKhan",
"id": 43553003,
"login": "cmatKhan",
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cmatKhan",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 19 days, 15:54:51
|
https://api.github.com/repos/huggingface/datasets/issues/7742
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7742/events
|
https://github.com/huggingface/datasets/issues/7742
| 3,336,704,928
|
I_kwDODunzps7G4hOg
| 7,742
|
module 'pyarrow' has no attribute 'PyExtensionType'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mnedelko",
"id": 6106392,
"login": "mnedelko",
"node_id": "MDQ6VXNlcjYxMDYzOTI=",
"organizations_url": "https://api.github.com/users/mnedelko/orgs",
"received_events_url": "https://api.github.com/users/mnedelko/received_events",
"repos_url": "https://api.github.com/users/mnedelko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mnedelko",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Just checked out the files and thishad already been addressed",
"For others who find this issue: \n\n`pip install --upgrade \"datasets>=2.20.0\"` \n\nfrom https://github.com/explodinggradients/ragas/issues/2170#issuecomment-3204393672 can fix it."
] | 2025-08-20T06:14:33
| 2025-09-09T02:51:46
| null |
NONE
| null | null | null | null |
### Describe the bug
When importing certain libraries, users will encounter the following error, which can be traced back to the `datasets` library: `module 'pyarrow' has no attribute 'PyExtensionType'`.
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will proceed to submit a PR with the below fix:
**Issue Reason**
The issue is that PyArrow version 21.0.0 doesn't have `PyExtensionType`: the class was deprecated in favour of `ExtensionType` in newer PyArrow releases and has since been removed.
**Issue Solution**
Making the following changes to the library files should temporarily resolve the issue.
I will submit a PR to the `datasets` library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
> 521 self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
510 _type: str = field(default=“Array5D”, init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
`!pip install ragas`
`from ragas import evaluate`
### Expected behavior
The required package installs without issue.
### Environment info
In Jupyter Notebook.
venv
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7741
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7741/events
|
https://github.com/huggingface/datasets/issues/7741
| 3,334,848,656
|
I_kwDODunzps7GxcCQ
| 7,741
|
Preserve tree structure when loading HDF5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[] | 2025-08-19T15:42:05
| 2025-08-26T15:28:06
| 2025-08-26T15:28:06
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
### Your contribution
I'll open a PR (#7743)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 23:46:01
|
https://api.github.com/repos/huggingface/datasets/issues/7739
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7739/events
|
https://github.com/huggingface/datasets/issues/7739
| 3,331,537,762
|
I_kwDODunzps7Gkzti
| 7,739
|
Replacement of "Sequence" feature with "List" breaks backward compatibility
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4",
"events_url": "https://api.github.com/users/evmaki/events{/privacy}",
"followers_url": "https://api.github.com/users/evmaki/followers",
"following_url": "https://api.github.com/users/evmaki/following{/other_user}",
"gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/evmaki",
"id": 15764776,
"login": "evmaki",
"node_id": "MDQ6VXNlcjE1NzY0Nzc2",
"organizations_url": "https://api.github.com/users/evmaki/orgs",
"received_events_url": "https://api.github.com/users/evmaki/received_events",
"repos_url": "https://api.github.com/users/evmaki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evmaki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/evmaki",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Backward compatibility here means 4.0.0 can load datasets saved with older versions.\n\nYou will need 4.0.0 to load datasets saved with 4.0.0"
] | 2025-08-18T17:28:38
| 2025-09-10T14:17:50
| null |
NONE
| null | null | null | null |
PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0, but I can't re-save them with the legacy feature type, and I can't load them in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7739/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7738
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7738/events
|
https://github.com/huggingface/datasets/issues/7738
| 3,328,948,690
|
I_kwDODunzps7Ga7nS
| 7,738
|
Allow saving multi-dimensional ndarray with dynamic shapes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ryan-minato",
"id": 82735346,
"login": "ryan-minato",
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"organizations_url": "https://api.github.com/users/ryan-minato/orgs",
"received_events_url": "https://api.github.com/users/ryan-minato/received_events",
"repos_url": "https://api.github.com/users/ryan-minato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ryan-minato",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"I agree this would be super valuable.\n\nIt looks like this was discussed a few years ago in https://github.com/huggingface/datasets/issues/5272#issuecomment-1550200824 but there were some issues. Those PRs are merged now and it looks like Arrow [officially supports](https://arrow.apache.org/docs/format/CanonicalExtensions.html#variable-shape-tensor) this so it's a good time to re-evaluate!",
"Happy to help with this, maybe we can think of adding a new type `Tensor` (instead of Array2D, 3D etc. which imply a fixed number of dims - we can keep them for backward compat anyways) that uses VariableShapeTensor (or FixedShapeTensor if the shape is provided maybe ? happy to discuss this)"
] | 2025-08-18T02:23:51
| 2025-08-26T15:25:02
| null |
NONE
| null | null | null | null |
### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data whose dimensions are not fixed.
A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example,
```python
{
"shape": (5, 224, 224),
"dtype": "uint8",
"data": [...]
}
```
This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.
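For illustration, a minimal sketch of how such a record could be decoded back into an ndarray, assuming `data` holds the flattened values:
```python
import numpy as np

# A record in the structured format sketched above (values are placeholders)
record = {
    "shape": (5, 224, 224),
    "dtype": "uint8",
    "data": np.zeros(5 * 224 * 224, dtype="uint8").tolist(),  # flattened values
}

# Reconstruct the multi-dimensional array from the stored shape and dtype
array = np.asarray(record["data"], dtype=record["dtype"]).reshape(record["shape"])
assert array.shape == (5, 224, 224)
```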
### Motivation
I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable.
The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.
https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614
A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).
### Your contribution
I am willing to create a PR to help implement this feature if the proposal is accepted.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7738/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7733
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7733/events
|
https://github.com/huggingface/datasets/issues/7733
| 3,304,979,299
|
I_kwDODunzps7E_ftj
| 7,733
|
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dennys246",
"id": 27898715,
"login": "dennys246",
"node_id": "MDQ6VXNlcjI3ODk4NzE1",
"organizations_url": "https://api.github.com/users/dennys246/orgs",
"received_events_url": "https://api.github.com/users/dennys246/received_events",
"repos_url": "https://api.github.com/users/dennys246/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dennys246",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"This is the download issues I come into, about ever other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />",
"I’m guessing this is just a feature so I’m going to close this thread. I also altered my loading scheme to start on the first index of a particular modality within the dataset (index ~390) and this issue went away with client error from too many requests. Due to how the dataset is sorted in HF, there are gaps in my dataset between modalities (~500) that this issue should theoretically also occur on but it does not. It seems after initially downloading the first image in a dataset the connection becomes approved on HF end and long lapses in checking entries in a dataset, without actually loading the full sample, are enabled. \n\nTL;DR Local handling doesn’t appear to be possible with images in the datasets library. Load the first image you need right away through storing it’s index and calling to it. Don’t iterate long sequences of HF repo’s looking for a condition to be met without first loading in a sample."
] | 2025-08-08T19:10:58
| 2025-10-07T04:47:36
| 2025-10-07T04:32:48
|
NONE
| null | null | null | null |
### Describe the bug
I'm not sure if this is a bug or a feature and I may just not fully understand how dataset loading is supposed to work, but it appears there may be a bug in how locally stored Image() columns are accessed. I've uploaded a new dataset to Hugging Face (rmdig/rocky_mountain_snowpack), but I've run into a lot of trouble getting the images handled properly (at least in the way I'd expect them to be handled).
I find that I cannot use relative paths for loading images, whether from the Hugging Face repo or from a local repository: any time I do, the library simply appends the relative path to my current working directory. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset root or abandon the dataset object structure, which I cannot imagine is intended. So I have to use URLs, since an absolute path on my system obviously wouldn't work for others. The URLs work OK, but even though I have the dataset downloaded locally, it appears to be re-downloaded every time I train my snowGAN model on it (and I often run into HTTPS errors for over-requesting the data).
Or maybe image relative paths aren't intended to be loaded directly through the datasets library as images and should be kept as strings for the user to handle? If so, I feel like you're missing out on some pretty seamless functionality.
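A possible workaround (a sketch only: it assumes file_path is left as a plain string in the README schema, and the local path below is a placeholder) is to resolve the relative paths against the dataset root and cast to Image() afterwards:
```python
import os

from datasets import Image, load_dataset

# Placeholder path to a local checkout of rmdig/rocky_mountain_snowpack
root = "/path/to/local/rocky_mountain_snowpack"
ds = load_dataset(root)

# Resolve the relative file_path column against the dataset root, then cast it
ds = ds.map(lambda ex: {"file_path": os.path.join(root, ex["file_path"])})
ds = ds.cast_column("file_path", Image())
```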
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
```yaml
---
dataset_info:
  features:
  - name: image
    dtype: Image
  - name: file_path
    dtype: Image
```
3. Initialize the dataset locally, make sure your working directory is not the dataset directory root
`dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')`
4. Access one of the samples and you'll get an error that the image was not found at current/working/directory/preprocessed/cores/image_1.png, showing that it's simply looking in the current working directory + relative path:
```
>>> dataset['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
image = PIL.Image.open(path)
^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```
### Expected behavior
I expect datasets and Image() to load the locally hosted data using the path/to/local/rocky_mountain_snowpack/ root (which I pass in with my datasets.load_dataset() call, or which you handle on the backend) plus the relative path.
Instead, it appears to load from my current working directory plus the relative path.
### Environment info
Tested on…
Windows 11, Ubuntu Linux 22.04, and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dennys246",
"id": 27898715,
"login": "dennys246",
"node_id": "MDQ6VXNlcjI3ODk4NzE1",
"organizations_url": "https://api.github.com/users/dennys246/orgs",
"received_events_url": "https://api.github.com/users/dennys246/received_events",
"repos_url": "https://api.github.com/users/dennys246/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dennys246",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7733/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 59 days, 9:21:50
|
https://api.github.com/repos/huggingface/datasets/issues/7732
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7732/events
|
https://github.com/huggingface/datasets/issues/7732
| 3,304,673,383
|
I_kwDODunzps7E-VBn
| 7,732
|
webdataset: key errors when `field_name` has upper case characters
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YassineYousfi",
"id": 29985433,
"login": "YassineYousfi",
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YassineYousfi",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-08-08T16:56:42
| 2025-08-08T16:56:42
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When using a webdataset each sample can be a collection of different "fields"
like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
If the field_name contains upper-case characters, the HF webdataset integration throws a KeyError when trying to load the dataset, e.g. from a dataset (since updated so that it no longer throws this error):
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[1], line 2
1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1409 return builder_instance.as_streaming_dataset(split=split)
1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
1413 download_config=download_config,
1414 download_mode=download_mode,
1415 verification_mode=verification_mode,
1416 num_proc=num_proc,
1417 storage_options=storage_options,
1418 )
1420 # Build dataset for splits
1421 keep_in_memory = (
1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
892 if num_proc is not None:
893 prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
895 dl_manager=dl_manager,
896 verification_mode=verification_mode,
897 **prepare_split_kwargs,
898 **download_and_prepare_kwargs,
899 )
900 # Sync info
901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609 super()._download_and_prepare(
1610 dl_manager,
1611 verification_mode,
1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1613 or verification_mode == VerificationMode.ALL_CHECKS,
1614 **prepare_splits_kwargs,
1615 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
946 split_dict = SplitDict(dataset_name=self.dataset_name)
947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
950 # Checksums verification
951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
78 if not self.info.features:
79 # Get one example to get the feature types
80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
83 raise ValueError(
84 "The TAR archives of the dataset should be in WebDataset format, "
85 "but the files in the archive don't share the same prefix or the same types."
86 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
53 data_extension = field_name.split(".")[-1]
54 if data_extension in cls.DECODERS:
---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
56 if current_example:
57 yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
### Steps to reproduce the bug
A unit test was added in https://github.com/huggingface/datasets/pull/7726; it fails without the fix proposed in the same PR.
### Expected behavior
Not throwing a key error.
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7732/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7731
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7731/events
|
https://github.com/huggingface/datasets/issues/7731
| 3,303,637,075
|
I_kwDODunzps7E6YBT
| 7,731
|
Add the possibility of a backend for audio decoding
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/intexcor",
"id": 142020129,
"login": "intexcor",
"node_id": "U_kgDOCHcOIQ",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"repos_url": "https://api.github.com/users/intexcor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/intexcor",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"is there a work around im stuck",
"never mind just downgraded"
] | 2025-08-08T11:08:56
| 2025-08-20T16:29:33
| null |
NONE
| null | null | null | null |
### Feature request
Add the possibility of choosing a backend for audio decoding. Before version 4.0.0, soundfile was used, and now torchcodec is used, but the problem is that torchcodec requires ffmpeg, which is problematic to install on, for example, Colab. Therefore, I suggest adding a decoder selection option when loading the dataset.
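As a possible stopgap (a sketch, not the requested backend option; the dataset id is a placeholder), decoding can be disabled on the Audio column and the raw bytes decoded with soundfile directly:
```python
import io

import soundfile as sf
from datasets import Audio, load_dataset

# Placeholder dataset id; decode=False skips torchcodec, so ffmpeg is not needed
ds = load_dataset("user/some-audio-dataset", split="train")
ds = ds.cast_column("audio", Audio(decode=False))

raw = ds[0]["audio"]  # dict with "bytes" and "path"; "bytes" may be None if only a path is stored
array, sampling_rate = sf.read(io.BytesIO(raw["bytes"]))
```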
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7731/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7729/events
|
https://github.com/huggingface/datasets/issues/7729
| 3,300,672,954
|
I_kwDODunzps7EvEW6
| 7,729
|
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
"events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}",
"followers_url": "https://api.github.com/users/SaleemMalikAI/followers",
"following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}",
"gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SaleemMalikAI",
"id": 115183904,
"login": "SaleemMalikAI",
"node_id": "U_kgDOBt2RIA",
"organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs",
"received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events",
"repos_url": "https://api.github.com/users/SaleemMalikAI/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SaleemMalikAI",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Is this related to the \"datasets\" library? @SaleemMalikAI "
] | 2025-08-07T14:07:23
| 2025-09-24T02:17:15
| null |
NONE
| null | null | null | null |
> Hi, is there any solution for that error? I tried to install this one:
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but tell me how to install a PyTorch version that is suitable for GPU.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7729/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7728/events
|
https://github.com/huggingface/datasets/issues/7728
| 3,298,854,904
|
I_kwDODunzps7EoIf4
| 7,728
|
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/efsotr",
"id": 104755879,
"login": "efsotr",
"node_id": "U_kgDOBj5ypw",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"repos_url": "https://api.github.com/users/efsotr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/efsotr",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"To load just one shard without errors, you should use data_files directly with split set to \"train\", but don’t specify \"allenai/c4\", since that points to the full dataset with all shards.\n\nInstead, do this:\n```\nfrom datasets import load_dataset\nfrom datasets import load_dataset\n\n# Load only one shard of C4\ntraindata = load_dataset(\n \"json\", # <-- use \"json\" since you’re directly passing JSON files\n data_files={\"train\": \"https://huggingface.co/datasets/allenai/c4/resolve/main/en/c4-train.00000-of-01024.json.gz\"},\n split=\"train\"\n)\n\nprint(traindata)\n```\nIf you want both train and validation but only a subset of shards, do:\n```\ntraindata = load_dataset(\n \"json\",\n data_files={\n \"train\": \"https://huggingface.co/datasets/allenai/c4/resolve/main/en/c4-train.00000-of-01024.json.gz\",\n \"validation\": \"https://huggingface.co/datasets/allenai/c4/resolve/main/en/c4-validation.00000-of-00008.json.gz\"\n }\n)\n\nprint(traindata)\n```",
"I just want to load a few files from allenai/c4.\nIf I do not specify allenai/c4, where will the files be loaded from?",
"My apologies, I’ve modified my previous answer.\nYou just need to specify the full path, for example:\n\nhttps://huggingface.co/datasets/allenai/c4/resolve/main/en/c4-train.00000-of-01024.json.gz\n\n<img width=\"1843\" height=\"633\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/b2922958-9d87-4b62-a00e-c5ca02e31c27\" />\n\nI hope this updated answer is helpful."
] | 2025-08-07T04:04:50
| 2025-10-06T21:08:39
| null |
NONE
| null | null | null | null |
### Describe the bug
When loading a dataset, the info specified by `data_files` does not overwrite the original split info.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz",
"validation": "en/c4-validation.00000-of-00008.json.gz"},
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train"
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
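A possible workaround (a sketch; it only skips the verification that fails here, it does not fix the underlying overwrite issue) is to pass `verification_mode="no_checks"`:
```python
from datasets import load_dataset

# Skipping split verification avoids the size/split checks against the recorded info
traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    verification_mode="no_checks",
)
```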
### Expected behavior
No error
### Environment info
datasets 4.0.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7728/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7727/events
|
https://github.com/huggingface/datasets/issues/7727
| 3,295,718,578
|
I_kwDODunzps7EcKyy
| 7,727
|
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/doctorpangloss",
"id": 2229300,
"login": "doctorpangloss",
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"type": "User",
"url": "https://api.github.com/users/doctorpangloss",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-08-06T08:21:37
| 2025-08-06T08:21:37
| null |
NONE
| null | null | null | null |
### Describe the bug
```yaml
- config_name: some_config
  data_files:
  - split: train
    path:
    - images/xyz/*.jpg
```
will correctly download but
```yaml
- config_name: some_config
  data_files:
  - split: train
    path:
    - ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper url joining. `load_dataset` on the same directory locally works fine.
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```yaml
- config_name: some_config
  data_files:
  - split: train
    path:
    - ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
### Environment info
datasets 4.0.0
datasets 3.4.0
Both versions reproduce the issue.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7727/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7724/events
|
https://github.com/huggingface/datasets/issues/7724
| 3,292,315,241
|
I_kwDODunzps7EPL5p
| 7,724
|
Can not stepinto load_dataset.py?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/micklexqg",
"id": 13776012,
"login": "micklexqg",
"node_id": "MDQ6VXNlcjEzNzc2MDEy",
"organizations_url": "https://api.github.com/users/micklexqg/orgs",
"received_events_url": "https://api.github.com/users/micklexqg/received_events",
"repos_url": "https://api.github.com/users/micklexqg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/micklexqg",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-08-05T09:28:51
| 2025-08-05T09:28:51
| null |
NONE
| null | null | null | null |
I set a breakpoint in "load_dataset.py" and tried to debug my data-loading code, but execution does not stop at any breakpoint. Can "load_dataset.py" not be stepped into?
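One thing worth checking (an assumption about the setup, not a confirmed cause): `load_dataset` is defined in `datasets/load.py`, not in a file named `load_dataset.py`, so the breakpoint has to be set in the file where the function actually lives. A quick way to print that location:
```python
import inspect

import datasets

# Print the source file where load_dataset is defined, so the breakpoint
# can be placed in that file (datasets/load.py in current releases).
print(inspect.getsourcefile(datasets.load_dataset))
```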
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7724/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7723/events
|
https://github.com/huggingface/datasets/issues/7723
| 3,289,943,261
|
I_kwDODunzps7EGIzd
| 7,723
|
Don't remove `trust_remote_code` arg!!!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/autosquid",
"id": 758925,
"login": "autosquid",
"node_id": "MDQ6VXNlcjc1ODkyNQ==",
"organizations_url": "https://api.github.com/users/autosquid/orgs",
"received_events_url": "https://api.github.com/users/autosquid/received_events",
"repos_url": "https://api.github.com/users/autosquid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/autosquid",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[] | 2025-08-04T15:42:07
| 2025-08-04T15:42:07
| null |
NONE
| null | null | null | null |
### Feature request
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
Add the `trust_remote_code` arg back, please!
### Motivation
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
### Your contribution
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7723/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7722
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7722/events
|
https://github.com/huggingface/datasets/issues/7722
| 3,289,741,064
|
I_kwDODunzps7EFXcI
| 7,722
|
Out of memory even though using load_dataset(..., streaming=True)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2025-08-04T14:41:55
| 2025-08-04T14:41:55
| null |
NONE
| null | null | null | null |
### Describe the bug
I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time, and I eventually run into an OOM.
### Steps to reproduce the bug
```python
import os

import soundfile as sf
from datasets import load_dataset
from tqdm import tqdm

# NSFW_TARGET_FOLDER is defined elsewhere in the original script
ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i, sample in enumerate(tqdm(ds)):
    target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')
    try:
        sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
    except Exception as e:
        print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect a small memory footprint, with memory being freed after each iteration of the for loop. Instead, memory usage keeps increasing. I tried removing the logic that writes the sound file and just printed the sample, but the issue remains the same.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7722/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7721/events
|
https://github.com/huggingface/datasets/issues/7721
| 3,289,426,104
|
I_kwDODunzps7EEKi4
| 7,721
|
Bad split error message when using percentages
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I'd like to work on this: add clearer validation/messages for percent-based splits + tests",
"The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n raise ValueError(f'Unknown split \"{split}\". Should be one of {list(name2len)}.')\nValueError: Unknown split \"train\". Should be one of ['test.clean', 'test.other', 'train.clean.100', 'train.clean.360', 'train.other.500', 'validation.clean', 'validation.other'].\n```\n"
] | 2025-08-04T13:20:25
| 2025-08-14T14:42:24
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
Edit: Same happens with a split like _train[:90000]_
### Steps to reproduce the bug
```python
from datasets import load_dataset

for split in range(10):
    split_str = f"train[{split*10}%:{(split+1)*10}%]"
    print(f"Processing split {split_str}...")
    ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
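For comparison, a sketch of the same loop without streaming=True (same placeholder repo id as above); percent slicing is documented for regular loading, and the error above suggests sliced split strings are not resolved the same way in streaming mode:
```python
from datasets import load_dataset

# Same placeholder repo id; without streaming, each 10% slice is fully materialized
for i in range(10):
    split_str = f"train[{i*10}%:{(i+1)*10}%]"
    ds = load_dataset("user/dataset", split=split_str)
    print(split_str, ds.num_rows)
```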
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
### Environment info
python 3.12.11
ubuntu 24
dataset 4.0.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7721/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/7720
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7720/events
|
https://github.com/huggingface/datasets/issues/7720
| 3,287,150,513
|
I_kwDODunzps7D7e-x
| 7,720
|
Datasets 4.0 map function causing column not found
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The maintainers might want to consider closing this issue.",
"Hi, have you tried on a large dataset (200GB+) perhaps? I will try my best to do a rerun with main branch when I have the time.",
"I ran it on a small dataset, maybe that’s why I didn’t hit the issue. If it still shows up on your side with the latest main, let me know. I can try it on a bigger set too."
] | 2025-08-03T12:52:34
| 2025-08-07T19:23:34
| null |
NONE
| null | null | null | null |
### Describe the bug
A column returned after mapping is not found in the new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction: after running get_total_audio_length, it errors out because `data` does not have a `duration` column.
```python
# NUM_PROC is defined elsewhere in the original script
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)
    print(data)
    durations = data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```
### Expected behavior
New datasets.Dataset instance should have new columns attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7720/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |