| column | type | min | max |
|---|---|---|---|
| html_url | string (length) | 46 | 51 |
| number | int64 | 1 | 7.85k |
| title | string (length) | 1 | 290 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| state | string (2 classes) | | |
| locked | bool (1 class) | | |
| comments | list (length) | 0 | 30 |
| created_at | timestamp[ns, tz=UTC] | 2020-04-14 10:18:02 | 2025-11-05 18:11:12 |
| updated_at | timestamp[ns, tz=UTC] | 2020-04-27 16:04:17 | 2025-11-06 09:44:34 |
| closed_at | timestamp[ns, tz=UTC], nullable (βŒ€) | 2020-04-14 12:01:40 | 2025-11-05 16:02:32 |
| author_association | string (4 classes) | | |
| draft | bool (2 classes) | | |
| pull_request | dict | | |
| body | string (length), nullable (βŒ€) | 0 | 228k |
| closed_by | dict | | |
| reactions | dict | | |
| state_reason | string (4 classes) | | |
| sub_issues_summary | dict | | |
| issue_dependencies_summary | dict | | |
| is_pull_request | bool (2 classes) | | |
https://github.com/huggingface/datasets/pull/7749
7,749
Fix typo in error message for cache directory deletion
{ "avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4", "events_url": "https://api.github.com/users/brchristian/events{/privacy}", "followers_url": "https://api.github.com/users/brchristian/followers", "following_url": "https://api.github.com/users/brchristian/following{/other_user}", "gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brchristian", "id": 2460418, "login": "brchristian", "node_id": "MDQ6VXNlcjI0NjA0MTg=", "organizations_url": "https://api.github.com/users/brchristian/orgs", "received_events_url": "https://api.github.com/users/brchristian/received_events", "repos_url": "https://api.github.com/users/brchristian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brchristian/subscriptions", "type": "User", "url": "https://api.github.com/users/brchristian", "user_view_type": "public" }
[]
closed
false
[]
2025-08-26T17:47:22Z
2025-09-12T15:43:08Z
2025-09-12T13:22:18Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7749.diff", "html_url": "https://github.com/huggingface/datasets/pull/7749", "merged_at": "2025-09-12T13:22:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/7749.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7749" }
This PR fixes a small typo in an error message in `src/datasets/fingerprint.py`: https://github.com/huggingface/datasets/blob/910fab20606893f69b4fccac5fcc883dddf5a14d/src/datasets/fingerprint.py#L63

```diff
- occured
+ occurred
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7749/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7748
7,748
docs: Streaming best practices
{ "avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4", "events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}", "followers_url": "https://api.github.com/users/Abdul-Omira/followers", "following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}", "gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Abdul-Omira", "id": 32625230, "login": "Abdul-Omira", "node_id": "MDQ6VXNlcjMyNjI1MjMw", "organizations_url": "https://api.github.com/users/Abdul-Omira/orgs", "received_events_url": "https://api.github.com/users/Abdul-Omira/received_events", "repos_url": "https://api.github.com/users/Abdul-Omira/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions", "type": "User", "url": "https://api.github.com/users/Abdul-Omira", "user_view_type": "public" }
[]
open
false
[]
2025-08-23T00:18:43Z
2025-09-07T02:33:36Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7748.diff", "html_url": "https://github.com/huggingface/datasets/pull/7748", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7748" }
Add a new 'Streaming best practices' page with practical patterns and pitfalls for large-scale/production use of IterableDataset. Includes examples for batched map with remove_columns, deterministic shuffling with set_epoch, multi-worker sharding, checkpoint/resume, and persistence to Parquet/Hub. Linked from How-to > General usage, next to Stream.
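For context, a minimal sketch of the streaming patterns the page covers (batched map with `remove_columns`, deterministic per-epoch shuffling via `set_epoch`). This is an illustration only; the dataset name and column names here are placeholders, not the ones used in the PR:

```python
from datasets import load_dataset

# Streaming yields examples on the fly instead of downloading the full dataset.
ds = load_dataset("allenai/c4", "en", split="train", streaming=True)

# Batched map with remove_columns: transform a batch dict and drop stale columns.
def add_char_count(batch):
    return {"n_chars": [len(t) for t in batch["text"]]}

ds = ds.map(add_char_count, batched=True, remove_columns=["timestamp", "url"])

# Deterministic shuffling: a buffered shuffle, reseeded each epoch via set_epoch.
ds = ds.shuffle(seed=42, buffer_size=10_000)
for epoch in range(3):
    ds.set_epoch(epoch)  # reseeds the shuffle buffer for this epoch
    for example in ds.take(2):
        print(epoch, example["n_chars"])
```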
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7748/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7747
7,747
Add wikipedia-2023-redirects dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4", "events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}", "followers_url": "https://api.github.com/users/Abdul-Omira/followers", "following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}", "gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Abdul-Omira", "id": 32625230, "login": "Abdul-Omira", "node_id": "MDQ6VXNlcjMyNjI1MjMw", "organizations_url": "https://api.github.com/users/Abdul-Omira/orgs", "received_events_url": "https://api.github.com/users/Abdul-Omira/received_events", "repos_url": "https://api.github.com/users/Abdul-Omira/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions", "type": "User", "url": "https://api.github.com/users/Abdul-Omira", "user_view_type": "public" }
[]
open
false
[ "you should host this dataset on HF with `ds.push_to_hub()` ! we stopped using dataset scripts some time ago" ]
2025-08-22T23:49:53Z
2025-09-12T13:23:34Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7747.diff", "html_url": "https://github.com/huggingface/datasets/pull/7747", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7747" }
Title: Add wikipedia-2023-redirects dataset (redirect resolution + pageviews)

Summary
- New dataset loader: wikipedia_2023_redirects
- Canonical Wikipedia pages enriched with:
  - redirects (aliases pointing to the page)
  - 2023 pageviews (aggregated)
- Streaming support; robust parsing; license notes included
- Tests with tiny dummy data (XML + TSVs); covers streaming

Motivation
RAG/retrieval often benefits from:
- Query expansion via redirect aliases
- Popularity prior via pageviews

This loader offers a practical, maintenance-light way to access canonical pages alongside their redirect aliases and 2023 pageview totals.

Features
- id: string
- title: string
- url: string
- text: string
- redirects: list[string]
- pageviews_2023: int32
- timestamp: string

Licensing
- Wikipedia text: CC BY-SA 3.0 (attribution and share-alike apply)
- Pageviews: public domain

The PR docs mention both, and the module docstring cites sources.

Notes
- The URLs in _get_urls_for_config are wired to dummy files for tests. In production, these would point to Wikimedia dumps:
  - XML page dumps: https://dumps.wikimedia.org/
  - Pageviews: https://dumps.wikimedia.org/other/pageviews/
- The schema is intentionally simple and stable. Pageview aggregation is per-title sum across 2023.

Testing
- make style && make quality
- pytest -q tests/test_dataset_wikipedia_2023_redirects.py

Example
```python
from datasets import load_dataset

ds = load_dataset("wikipedia_2023_redirects", split="train")
print(ds[0]["title"], ds[0]["redirects"][:5], ds[0]["pageviews_2023"])
```

Acknowledgements
- Wikipedia/Wikimedia Foundation for the source data
- Hugging Face Datasets for the dataset infrastructure
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7747/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7746
7,746
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
{ "avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4", "events_url": "https://api.github.com/users/Awesome075/events{/privacy}", "followers_url": "https://api.github.com/users/Awesome075/followers", "following_url": "https://api.github.com/users/Awesome075/following{/other_user}", "gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Awesome075", "id": 187888489, "login": "Awesome075", "node_id": "U_kgDOCzLzaQ", "organizations_url": "https://api.github.com/users/Awesome075/orgs", "received_events_url": "https://api.github.com/users/Awesome075/received_events", "repos_url": "https://api.github.com/users/Awesome075/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions", "type": "User", "url": "https://api.github.com/users/Awesome075", "user_view_type": "public" }
[]
open
false
[ "@sayakpaul @a-r-r-o-w could you verify this issue then i can contribute to solve this issue!😊" ]
2025-08-22T12:52:03Z
2025-08-27T20:23:35Z
null
NONE
null
null
Hi,

The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter. The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.

I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly: **[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**

Could the maintainers please update the official `multi_news` dataset to use this working Parquet version, or guide me in doing so? This would involve updating the canonical pointer for "multi_news" to resolve to the new repository, which would fix the dataset for all users and ensure its continued availability.

Thank you!
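For reference, a sketch of the script-to-Parquet conversion workflow this issue describes. It assumes a pre-4.0 `datasets` release that can still execute loading scripts with `trust_remote_code`; the target repo name is a placeholder:

```python
from datasets import load_dataset

# Run this with an older `datasets` release that still supports loading scripts.
ds = load_dataset("alexfabbri/multi_news", trust_remote_code=True)

# Re-upload as plain Parquet: the resulting repo loads without any script.
ds.push_to_hub("your-username/multi_news_parquet")
```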
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7745
7,745
Audio mono argument no longer supported, despite class documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4", "events_url": "https://api.github.com/users/jheitz/events{/privacy}", "followers_url": "https://api.github.com/users/jheitz/followers", "following_url": "https://api.github.com/users/jheitz/following{/other_user}", "gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jheitz", "id": 5666041, "login": "jheitz", "node_id": "MDQ6VXNlcjU2NjYwNDE=", "organizations_url": "https://api.github.com/users/jheitz/orgs", "received_events_url": "https://api.github.com/users/jheitz/received_events", "repos_url": "https://api.github.com/users/jheitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jheitz/subscriptions", "type": "User", "url": "https://api.github.com/users/jheitz", "user_view_type": "public" }
[]
open
false
[ "I want to solve this problem can you please assign it to me\nand also can you please guide whether the mono parameter is required to be re-added or the documentation needs an update?" ]
2025-08-22T12:15:41Z
2025-08-24T18:22:41Z
null
NONE
null
null
### Describe the bug

Either update the documentation, or re-introduce the flag (and the corresponding logic to convert the audio to mono).

### Steps to reproduce the bug

`Audio(sampling_rate=16000, mono=True)` raises the error `TypeError: Audio.__init__() got an unexpected keyword argument 'mono'`.

However, the class documentation says:

    Args:
        sampling_rate (`int`, *optional*):
            Target sampling rate. If `None`, the native sampling rate is used.
        mono (`bool`, defaults to `True`):
            Whether to convert the audio signal to mono by averaging samples across channels.
        [...]

### Expected behavior

The above call should either work, or the documentation within the Audio class should be updated.

### Environment info

- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
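Until this is resolved, one possible workaround is to average channels manually in a `map`. This is only a sketch, assuming the decoded audio dict exposes an `array` shaped `(channels, samples)` as in earlier `datasets` releases; the dataset name is hypothetical:

```python
import numpy as np
from datasets import Audio, load_dataset

ds = load_dataset("my-org/my-audio-dataset", split="train")  # hypothetical dataset
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

def to_mono(example):
    # Average across channels ourselves, since Audio(..., mono=True) is gone.
    arr = np.asarray(example["audio"]["array"])
    mono = arr.mean(axis=0) if arr.ndim > 1 else arr  # assumes (channels, samples)
    return {"audio_mono": mono}

ds = ds.map(to_mono)
```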
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7744
7,744
dtype: ClassLabel is not parsed correctly in `features.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4", "events_url": "https://api.github.com/users/cmatKhan/events{/privacy}", "followers_url": "https://api.github.com/users/cmatKhan/followers", "following_url": "https://api.github.com/users/cmatKhan/following{/other_user}", "gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cmatKhan", "id": 43553003, "login": "cmatKhan", "node_id": "MDQ6VXNlcjQzNTUzMDAz", "organizations_url": "https://api.github.com/users/cmatKhan/orgs", "received_events_url": "https://api.github.com/users/cmatKhan/received_events", "repos_url": "https://api.github.com/users/cmatKhan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions", "type": "User", "url": "https://api.github.com/users/cmatKhan", "user_view_type": "public" }
[]
closed
false
[ "I think it's \"class_label\"", "> I think it's \"class_label\"\n\nI see -- thank you. This works\n\n```yaml\nlicense: mit\nlanguage:\n- en\ntags:\n- genomics\n- yeast\n- transcription\n- perturbation\n- response\n- overexpression\npretty_name: Hackett, 2020 Overexpression\nsize_categories:\n- 1M<n<10M\ndataset_info:\n features:\n ...\n - name: mechanism\n dtype:\n class_label:\n names: [\"GEV\", \"ZEV\"]\n description: induction system (GEV or ZEV)\n - name: restriction\n dtype:\n class_label:\n names: [\"M\", \"N\", \"P\"]\n description: nutrient limitation (M, N or P)\n```\n\nI see the documentation for [datasets.ClassLabel](https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.ClassLabel). And the documentation for the [dataset cards](https://huggingface.co/docs/hub/en/datasets-cards). I don't see anything in either of those places, though, that specifies the pattern above.\n\nI suppose rather than writing the yaml by hand, the expected workflow is to use `datasets` to construct these features?", "I generally copy/paste and adapt a YAML from another dataset.\n\nBut it's also possible to generate it from `datasets` like that\n\n```python\n>>> import yaml\n>>> print(yaml.dump(features._to_yaml_list(), sort_keys=False))\n- name: start\n dtype: int32\n- name: end\n dtype: int32\n- name: restriction\n dtype:\n class_label:\n names: [\"M\", \"N\", \"P\"]\n```" ]
2025-08-21T23:28:50Z
2025-09-10T15:23:41Z
2025-09-10T15:23:41Z
NONE
null
null
`dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.

This yaml in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I changed `ClassLabel` to `string` to use a different dtype in order to avoid the error):

```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
  features:
  - name: start
    dtype: int32
    description: Start coordinate (1-based, **inclusive**)
  - name: end
    dtype: int32
    description: End coordinate (1-based, **inclusive**)
  - name: strand
    dtype: ClassLabel
  ...
```

is producing the following error in the data viewer:

```
Error code:   ConfigNamesError
Exception:    ValueError
Message:      Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
                  config_names = get_dataset_config_names(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
                  dataset_module = dataset_module_factory(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
                  raise e1 from None
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
                  return HubDatasetModuleFactory(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
                  dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
                  dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
                  yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
                  return cls.from_dict(from_yaml_inner(yaml_data))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
                  obj = generate_from_dict(dic)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
                  return {key: generate_from_dict(value) for key, value in obj.items()}
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
                  return {key: generate_from_dict(value) for key, value in obj.items()}
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
                  raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
              ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```

I think that this is caused by this line:

https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013

Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py):

```python
import itertools
import os
import re

_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")

_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")

_split_re = r"^\w+(\.\w+)*$"


def snakecase_to_camelcase(name):
    """Convert snake-case string to camel-case string."""
    name = _single_underscore_re.split(name)
    name = [_multiple_underscores_re.split(n) for n in name]
    return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")

snakecase_to_camelcase("ClassLabel")
```

Result:

```raw
'Classlabel'
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4", "events_url": "https://api.github.com/users/cmatKhan/events{/privacy}", "followers_url": "https://api.github.com/users/cmatKhan/followers", "following_url": "https://api.github.com/users/cmatKhan/following{/other_user}", "gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cmatKhan", "id": 43553003, "login": "cmatKhan", "node_id": "MDQ6VXNlcjQzNTUzMDAz", "organizations_url": "https://api.github.com/users/cmatKhan/orgs", "received_events_url": "https://api.github.com/users/cmatKhan/received_events", "repos_url": "https://api.github.com/users/cmatKhan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions", "type": "User", "url": "https://api.github.com/users/cmatKhan", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7743
7,743
Refactor HDF5 and preserve tree structure
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[]
closed
false
[ "@lhoestq this is ready for you now!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7743). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-08-21T17:28:17Z
2025-08-26T15:28:05Z
2025-08-26T15:28:05Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7743.diff", "html_url": "https://github.com/huggingface/datasets/pull/7743", "merged_at": "2025-08-26T15:28:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/7743.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7743" }
Closes #7741. Follow-up to #7690.

- Recursive parsing and feature inference, to preserve the tree structure of the file. Note this means we now visit all links in the file. It also means we have to call `combine_chunks` on any large non-root datasets.
- Support for `complex64` (two `float32`s; used to be converted to two `float64`s)
- Support for ndim complex, compound, more field types for compound (due to reusing the main parser, compound types are treated like groups)
- Cleaned up varlen support
- Always do feature inference and always cast to features (used to cast to schema)
- Updated tests to use `load_dataset` instead of internal APIs
- Removed `columns` in config. Have to give Features (i.e., must specify types) if filtering
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7743/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7742
7,742
module 'pyarrow' has no attribute 'PyExtensionType'
{ "avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4", "events_url": "https://api.github.com/users/mnedelko/events{/privacy}", "followers_url": "https://api.github.com/users/mnedelko/followers", "following_url": "https://api.github.com/users/mnedelko/following{/other_user}", "gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mnedelko", "id": 6106392, "login": "mnedelko", "node_id": "MDQ6VXNlcjYxMDYzOTI=", "organizations_url": "https://api.github.com/users/mnedelko/orgs", "received_events_url": "https://api.github.com/users/mnedelko/received_events", "repos_url": "https://api.github.com/users/mnedelko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions", "type": "User", "url": "https://api.github.com/users/mnedelko", "user_view_type": "public" }
[]
open
false
[ "Just checked out the files and thishad already been addressed", "For others who find this issue: \n\n`pip install --upgrade \"datasets>=2.20.0\"` \n\nfrom https://github.com/explodinggradients/ragas/issues/2170#issuecomment-3204393672 can fix it." ]
2025-08-20T06:14:33Z
2025-09-09T02:51:46Z
null
NONE
null
null
### Describe the bug

When importing certain libraries, users will encounter the following error, which can be traced back to the datasets library: `module 'pyarrow' has no attribute 'PyExtensionType'`.

Example issue: https://github.com/explodinggradients/ragas/issues/2170

The issue occurs due to the following. I will proceed to submit a PR with the below fix:

**Issue Reason**

The issue is that PyArrow version 21.0.0 doesn't have `PyExtensionType`. This was changed in newer versions of PyArrow: the `PyExtensionType` class was renamed to `ExtensionType` in PyArrow 13.0.0 and later versions.

**Issue Solution**

Making the following changes to the following lib files should temporarily resolve the issue. I will submit a PR to the datasets library in the meantime.

env_name/lib/python3.10/site-packages/datasets/features/features.py:

```diff
     self.shape = tuple(shape)
     self.value_type = dtype
     self.storage_dtype = self._generate_dtype(self.value_type)
-    pa.PyExtensionType.__init__(self, self.storage_dtype)
+    pa.ExtensionType.__init__(self, self.storage_dtype)

 def __reduce__(self):
     return self.__class__, (
```

Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:

```diff
 _type: str = field(default="Array5D", init=False, repr=False)


-class _ArrayXDExtensionType(pa.PyExtensionType):
+class _ArrayXDExtensionType(pa.ExtensionType):
     ndims: Optional[int] = None

     def __init__(self, shape: tuple, dtype: str):
```

### Steps to reproduce the bug

Ragas version: 0.3.1
Python version: 3.11

**Code to Reproduce**

_**In notebook:**_

!pip install ragas
from ragas import evaluate

### Expected behavior

The required package installs without issue.

### Environment info

In Jupyter Notebook, venv.
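Worth noting that the rename is not a pure drop-in: unlike `pa.PyExtensionType`, `pa.ExtensionType` requires an extension name and explicit (de)serialization hooks, plus registration. A rough sketch of the migration pattern, with a placeholder storage type and extension name (not the actual `datasets` implementation):

```python
import json

import pyarrow as pa

class ArrayXDExtensionType(pa.ExtensionType):
    def __init__(self, shape: tuple, dtype: str):
        self.shape = tuple(shape)
        self.value_type = dtype
        storage = pa.list_(pa.int32())  # placeholder storage type for this sketch
        # ExtensionType (unlike PyExtensionType) also takes an extension name.
        super().__init__(storage, "my.array_xd")

    def __arrow_ext_serialize__(self) -> bytes:
        # PyExtensionType pickled itself implicitly; ExtensionType serializes explicitly.
        return json.dumps({"shape": list(self.shape), "dtype": self.value_type}).encode()

    @classmethod
    def __arrow_ext_deserialize__(cls, storage_type, serialized: bytes):
        params = json.loads(serialized.decode())
        return cls(tuple(params["shape"]), params["dtype"])

# Registration is required to round-trip the type through IPC/Parquet.
pa.register_extension_type(ArrayXDExtensionType((2, 2), "int32"))
```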
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7741
7,741
Preserve tree structure when loading HDF5
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
[]
2025-08-19T15:42:05Z
2025-08-26T15:28:06Z
2025-08-26T15:28:06Z
CONTRIBUTOR
null
null
### Feature request

https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374

### Motivation

`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.

### Your contribution

I'll open a PR (#7743)
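For illustration, nested dicts in `Features` can mirror HDF5 group nesting directly; a small sketch with made-up feature names (using the `List` feature from `datasets` 4.x):

```python
from datasets import Features, List, Value

# An HDF5 layout like /sensors/temperature and /sensors/readings/values
# can map onto nested Features instead of flattened "sensors.temperature" keys.
features = Features(
    {
        "sensors": {
            "temperature": Value("float32"),
            "readings": {"values": List(Value("int64"))},
        }
    }
)
print(features)
```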
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7740
7,740
Document HDF5 support
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[]
closed
false
[ "@lhoestq any guidance on what else to add/feedback on what is there now? It seems a bit minimal, but I don't think it's worth doing an entire page on HDF5?" ]
2025-08-19T14:53:04Z
2025-09-24T14:51:11Z
2025-09-24T14:51:11Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7740.diff", "html_url": "https://github.com/huggingface/datasets/pull/7740", "merged_at": "2025-09-24T14:51:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/7740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7740" }
I think these are at least the main places where we should put content. Ideally it is not just repeated in the final version. Ref #7690.

- [x] Wait for #7743 to land
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7740/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7739
7,739
Replacement of "Sequence" feature with "List" breaks backward compatibility
{ "avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4", "events_url": "https://api.github.com/users/evmaki/events{/privacy}", "followers_url": "https://api.github.com/users/evmaki/followers", "following_url": "https://api.github.com/users/evmaki/following{/other_user}", "gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/evmaki", "id": 15764776, "login": "evmaki", "node_id": "MDQ6VXNlcjE1NzY0Nzc2", "organizations_url": "https://api.github.com/users/evmaki/orgs", "received_events_url": "https://api.github.com/users/evmaki/received_events", "repos_url": "https://api.github.com/users/evmaki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/evmaki/subscriptions", "type": "User", "url": "https://api.github.com/users/evmaki", "user_view_type": "public" }
[]
open
false
[ "Backward compatibility here means 4.0.0 can load datasets saved with older versions.\n\nYou will need 4.0.0 to load datasets saved with 4.0.0" ]
2025-08-18T17:28:38Z
2025-09-10T14:17:50Z
null
NONE
null
null
PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with that feature in 4.0.0 cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.

Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0, but I can't re-save them with the legacy feature type, and I can't load them in 3.6.0 for obvious reasons.

Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7738
7,738
Allow saving multi-dimensional ndarray with dynamic shapes
{ "avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4", "events_url": "https://api.github.com/users/ryan-minato/events{/privacy}", "followers_url": "https://api.github.com/users/ryan-minato/followers", "following_url": "https://api.github.com/users/ryan-minato/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ryan-minato", "id": 82735346, "login": "ryan-minato", "node_id": "MDQ6VXNlcjgyNzM1MzQ2", "organizations_url": "https://api.github.com/users/ryan-minato/orgs", "received_events_url": "https://api.github.com/users/ryan-minato/received_events", "repos_url": "https://api.github.com/users/ryan-minato/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions", "type": "User", "url": "https://api.github.com/users/ryan-minato", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[ "I agree this would be super valuable.\n\nIt looks like this was discussed a few years ago in https://github.com/huggingface/datasets/issues/5272#issuecomment-1550200824 but there were some issues. Those PRs are merged now and it looks like Arrow [officially supports](https://arrow.apache.org/docs/format/CanonicalExtensions.html#variable-shape-tensor) this so it's a good time to re-evaluate!", "Happy to help with this, maybe we can think of adding a new type `Tensor` (instead of Array2D, 3D etc. which imply a fixed number of dims - we can keep them for backward compat anyways) that uses VariableShapeTensor (or FixedShapeTensor if the shape is provided maybe ? happy to discuss this)" ]
2025-08-18T02:23:51Z
2025-08-26T15:25:02Z
null
NONE
null
null
### Feature request

I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data where the dimensions are not fixed.

A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example:

```python
{
    "shape": (5, 224, 224),
    "dtype": "uint8",
    "data": [...]
}
```

This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.

### Motivation

I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable.

The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.

https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614

A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).

### Your contribution

I am willing to create a PR to help implement this feature if the proposal is accepted.
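Until such a feature exists, one workaround consistent with the proposed format is to store the flattened values alongside the shape and dtype and rebuild the ndarray on access; a sketch with made-up column names:

```python
import numpy as np
from datasets import Dataset

arrays = [
    np.zeros((5, 224, 224), dtype=np.uint8),  # e.g. a five-channel SDSS cutout
    np.ones((5, 128, 128), dtype=np.uint8),   # a differently sized one
]

# Store shape/dtype/flattened-data columns; no fixed shape in the schema.
ds = Dataset.from_dict(
    {
        "shape": [list(a.shape) for a in arrays],
        "dtype": [str(a.dtype) for a in arrays],
        "flat": [a.ravel().tolist() for a in arrays],
    }
)

def rebuild(example):
    # Reconstruct the original ndarray from the stored pieces.
    return np.asarray(example["flat"], dtype=example["dtype"]).reshape(example["shape"])

print(rebuild(ds[0]).shape)  # (5, 224, 224)
```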
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7737
7,737
docs: Add column overwrite example to batch mapping guide
{ "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4", "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}", "followers_url": "https://api.github.com/users/Sanjaykumar030/followers", "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}", "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sanjaykumar030", "id": 183703408, "login": "Sanjaykumar030", "node_id": "U_kgDOCvMXcA", "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs", "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events", "repos_url": "https://api.github.com/users/Sanjaykumar030/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions", "type": "User", "url": "https://api.github.com/users/Sanjaykumar030", "user_view_type": "public" }
[]
closed
false
[ "Hi @lhoestq, just a gentle follow-up on this PR." ]
2025-08-13T14:20:19Z
2025-09-04T11:11:37Z
2025-09-04T11:11:37Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7737.diff", "html_url": "https://github.com/huggingface/datasets/pull/7737", "merged_at": "2025-09-04T11:11:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/7737.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7737" }
This PR adds a complementary example showing the **column-overwriting** pattern, which is both more direct and more flexible for many transformations.

### Proposed Change

The original `remove_columns` example remains untouched. Below it, this PR introduces an alternative approach that overwrites an existing column during batch mapping. This teaches users a core `.map()` capability for in-place transformations without extra intermediate steps.

**New Example:**

> ```python
> >>> from datasets import Dataset
> >>> dataset = Dataset.from_dict({"a": [0, 1, 2]})
> # Overwrite "a" directly to duplicate each value
> >>> duplicated_dataset = dataset.map(
> ...     lambda batch: {"a": [x for x in batch["a"] for _ in range(2)]},
> ...     batched=True
> ... )
> >>> duplicated_dataset
> Dataset({
>     features: ['a'],
>     num_rows: 6
> })
> >>> duplicated_dataset["a"]
> [0, 0, 1, 1, 2, 2]
> ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7737/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7736
7,736
Fix type hint `train_test_split`
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7736). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-08-11T20:46:53Z
2025-08-13T13:13:50Z
2025-08-13T13:13:48Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7736.diff", "html_url": "https://github.com/huggingface/datasets/pull/7736", "merged_at": "2025-08-13T13:13:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/7736.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7736" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7736/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7735
7,735
fix largelist repr
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7735). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-08-11T15:17:42Z
2025-08-11T15:39:56Z
2025-08-11T15:39:54Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7735.diff", "html_url": "https://github.com/huggingface/datasets/pull/7735", "merged_at": "2025-08-11T15:39:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/7735.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7735" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7735/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7734
7,734
Fixing __getitem__ of datasets which behaves inconsistent to documentation when setting _format_type to None
{ "avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4", "events_url": "https://api.github.com/users/awagen/events{/privacy}", "followers_url": "https://api.github.com/users/awagen/followers", "following_url": "https://api.github.com/users/awagen/following{/other_user}", "gists_url": "https://api.github.com/users/awagen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/awagen", "id": 40367113, "login": "awagen", "node_id": "MDQ6VXNlcjQwMzY3MTEz", "organizations_url": "https://api.github.com/users/awagen/orgs", "received_events_url": "https://api.github.com/users/awagen/received_events", "repos_url": "https://api.github.com/users/awagen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awagen/subscriptions", "type": "User", "url": "https://api.github.com/users/awagen", "user_view_type": "public" }
[]
closed
false
[ "this breaking change is actually expected, happy to help with a fix in sentencetransformers to account for this", "Thank you for the context. I thought this was a mismatch do the documentation. Good to know it was intentional. No worries, can add a PR to sentence transformers." ]
2025-08-09T15:52:54Z
2025-08-17T07:23:00Z
2025-08-17T07:23:00Z
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7734.diff", "html_url": "https://github.com/huggingface/datasets/pull/7734", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7734.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7734" }
Setting `_format_type` to `None` should return plain Python objects, but as of 4.0.0 it returns `Column`. This fails in libraries such as sentencetransformers (e.g., in the generation of hard negatives) where plain Python is expected.
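A small sketch of the behavior change described here, assuming 4.0's lazy `Column` is iterable (materializing with `list` restores plain Python values):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [0, 1, 2]})

col = ds["a"]      # datasets>=4.0: a lazy Column object, not a list
plain = list(col)  # materializes to a plain Python list: [0, 1, 2]
```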
{ "avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4", "events_url": "https://api.github.com/users/awagen/events{/privacy}", "followers_url": "https://api.github.com/users/awagen/followers", "following_url": "https://api.github.com/users/awagen/following{/other_user}", "gists_url": "https://api.github.com/users/awagen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/awagen", "id": 40367113, "login": "awagen", "node_id": "MDQ6VXNlcjQwMzY3MTEz", "organizations_url": "https://api.github.com/users/awagen/orgs", "received_events_url": "https://api.github.com/users/awagen/received_events", "repos_url": "https://api.github.com/users/awagen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awagen/subscriptions", "type": "User", "url": "https://api.github.com/users/awagen", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7734/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7733
7,733
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
{ "avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4", "events_url": "https://api.github.com/users/dennys246/events{/privacy}", "followers_url": "https://api.github.com/users/dennys246/followers", "following_url": "https://api.github.com/users/dennys246/following{/other_user}", "gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dennys246", "id": 27898715, "login": "dennys246", "node_id": "MDQ6VXNlcjI3ODk4NzE1", "organizations_url": "https://api.github.com/users/dennys246/orgs", "received_events_url": "https://api.github.com/users/dennys246/received_events", "repos_url": "https://api.github.com/users/dennys246/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dennys246/subscriptions", "type": "User", "url": "https://api.github.com/users/dennys246", "user_view_type": "public" }
[]
closed
false
[ "This is the download issues I come into, about ever other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />", "I’m guessing this is just a feature so I’m going to close this thread. I also altered my loading scheme to start on the first index of a particular modality within the dataset (index ~390) and this issue went away with client error from too many requests. Due to how the dataset is sorted in HF, there are gaps in my dataset between modalities (~500) that this issue should theoretically also occur on but it does not. It seems after initially downloading the first image in a dataset the connection becomes approved on HF end and long lapses in checking entries in a dataset, without actually loading the full sample, are enabled. \n\nTL;DR Local handling doesn’t appear to be possible with images in the datasets library. Load the first image you need right away through storing it’s index and calling to it. Don’t iterate long sequences of HF repo’s looking for a condition to be met without first loading in a sample." ]
2025-08-08T19:10:58Z
2025-10-07T04:47:36Z
2025-10-07T04:32:48Z
NONE
null
null
### Describe the bug I’m not sure if this is a bug or a feature and I just don’t fully understand how dataset loading is meant to work, but it appears there may be a bug with how locally stored Image() features are being accessed. I’ve uploaded a new dataset to Hugging Face (rmdig/rocky_mountain_snowpack) but I’ve run into a ton of trouble trying to have the images handled properly (at least in the way I’d expect them to be handled). I find that I cannot use relative paths for loading images remotely from the Hugging Face repo or from a local repository. Any time I do, it simply prepends my current working directory to the relative path. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset folder or abandon the dataset object structure, which I cannot imagine you intended. So I have to use URLs, since an absolute path on my system obviously wouldn’t work for others. The URL works OK, but despite having the dataset locally downloaded, it appears to be redownloading it every time I train my snowGAN model on it (and oftentimes I run into HTTP errors for over-requesting the data). Or maybe image relative paths aren't intended to be loaded directly through your datasets library as images and should be kept as strings for the user to handle? If so, I feel like you’re missing out on some pretty seamless functionality. ### Steps to reproduce the bug 1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer. 2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string ` --- dataset_info: features: - name: image dtype: Image - name: file_path dtype: Image ` 3. Initialize the dataset locally, making sure your working directory is not the dataset directory root `dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')` 4. Call one of the samples and you’ll get an error that the image was not found in current/working/directory/preprocessed/cores/image_1.png. 
Showing that it’s simply looking in the current working directory + relative path ` >>> dataset['train'][0] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__ return self._getitem(key) ^^^^^^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem formatted_output = format_table( ^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table return formatter(pa_table, query_type=query_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__ return self.format_row(pa_table) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row row = self.python_features_decoder.decode_row(row) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example image = PIL.Image.open(path) ^^^^^^^^^^^^^^^^^^^^ File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open fp = builtins.open(filename, "rb") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png' ` ### Expected behavior I expect the datasets and Image() to load the locally hosted data using path/to/local/rocky_mountain_snowpack/ (that I pass in with my datasets.load_dataset() or the you all handle on the backend) call + relative path. Instead it appears to load from my current working directory + relative path. ### Environment info Tested on… Windows 11, Ubuntu Linux 22.04 and Mac Sequoia 15.5 Silicone M2 datasets version 4.0.0 Python 3.12 and 3.13
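A hedged workaround sketch for the behavior described above: keep `file_path` declared as a string in the README YAML, resolve it against the dataset root, then cast to `Image`, so decoding no longer depends on the current working directory (the root path is an assumption):

```python
# Workaround sketch: make the relative paths absolute before casting to Image.
import os

import datasets

root = "/path/to/local/rocky_mountain_snowpack"  # assumption: local clone location
ds = datasets.load_dataset(root)  # with file_path left as dtype: string

ds = ds.map(lambda x: {"file_path": os.path.join(root, x["file_path"])})
ds = ds.cast_column("file_path", datasets.Image())  # now decodes from absolute paths
print(ds["train"][0]["file_path"])
```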
{ "avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4", "events_url": "https://api.github.com/users/dennys246/events{/privacy}", "followers_url": "https://api.github.com/users/dennys246/followers", "following_url": "https://api.github.com/users/dennys246/following{/other_user}", "gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dennys246", "id": 27898715, "login": "dennys246", "node_id": "MDQ6VXNlcjI3ODk4NzE1", "organizations_url": "https://api.github.com/users/dennys246/orgs", "received_events_url": "https://api.github.com/users/dennys246/received_events", "repos_url": "https://api.github.com/users/dennys246/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dennys246/subscriptions", "type": "User", "url": "https://api.github.com/users/dennys246", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7732
7,732
webdataset: key errors when `field_name` has upper case characters
{ "avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4", "events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}", "followers_url": "https://api.github.com/users/YassineYousfi/followers", "following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}", "gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YassineYousfi", "id": 29985433, "login": "YassineYousfi", "node_id": "MDQ6VXNlcjI5OTg1NDMz", "organizations_url": "https://api.github.com/users/YassineYousfi/orgs", "received_events_url": "https://api.github.com/users/YassineYousfi/received_events", "repos_url": "https://api.github.com/users/YassineYousfi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions", "type": "User", "url": "https://api.github.com/users/YassineYousfi", "user_view_type": "public" }
[]
open
false
[]
2025-08-08T16:56:42Z
2025-08-08T16:56:42Z
null
CONTRIBUTOR
null
null
### Describe the bug When using a webdataset each sample can be a collection of different "fields" like this: ``` images17/image194.left.jpg images17/image194.right.jpg images17/image194.json images17/image12.left.jpg images17/image12.right.jpg images17/image12.json ``` if the field_name contains upper case characters, the HF webdataset integration throws a key error when trying to load the dataset: e.g. from a dataset (now updated so that it doesn't throw this error) ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[1], line 2 1 from datasets import load_dataset ----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1) File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs) 1409 return builder_instance.as_streaming_dataset(split=split) 1411 # Download and prepare data -> 1412 builder_instance.download_and_prepare( 1413 download_config=download_config, 1414 download_mode=download_mode, 1415 verification_mode=verification_mode, 1416 num_proc=num_proc, 1417 storage_options=storage_options, 1418 ) 1420 # Build dataset for splits 1421 keep_in_memory = ( 1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1423 ) File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 892 if num_proc is not None: 893 prepare_split_kwargs["num_proc"] = num_proc --> 894 self._download_and_prepare( 895 dl_manager=dl_manager, 896 verification_mode=verification_mode, 897 **prepare_split_kwargs, 898 **download_and_prepare_kwargs, 899 ) 900 # Sync info 901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1609 super()._download_and_prepare( 1610 dl_manager, 1611 verification_mode, 1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS 1613 or verification_mode == VerificationMode.ALL_CHECKS, 1614 **prepare_splits_kwargs, 1615 ) File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 946 split_dict = SplitDict(dataset_name=self.dataset_name) 947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 950 # Checksums verification 951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager) 78 if not self.info.features: 79 # Get one example to get the feature types 80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0]) ---> 81 
first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE)) 82 if any(example.keys() != first_examples[0].keys() for example in first_examples): 83 raise ValueError( 84 "The TAR archives of the dataset should be in WebDataset format, " 85 "but the files in the archive don't share the same prefix or the same types." 86 ) File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator) 53 data_extension = field_name.split(".")[-1] 54 if data_extension in cls.DECODERS: ---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name]) 56 if current_example: 57 yield current_example KeyError: 'processed_log_IMU_magnetometer_value.npy' ``` ### Steps to reproduce the bug unit test was added in: https://github.com/huggingface/datasets/pull/7726 it fails without the fixed proposed in the same PR ### Expected behavior Not throwing a key error. ### Environment info ``` - `datasets` version: 4.0.0 - Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39 - Python version: 3.11.4 - `huggingface_hub` version: 0.33.4 - PyArrow version: 21.0.0 - Pandas version: 2.3.1 - `fsspec` version: 2025.7.0 ```
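A hedged sketch of the decoding step at issue, under the assumption that only the file extension should be case-normalized while the sample key is used verbatim (stand-in decoder table, not the library's actual code):

```python
# Stand-in decoder table; in the real module this maps extensions such as
# "npy", "jpg", "json" to decoding functions.
DECODERS = {"npy": lambda raw: raw}

def decode_fields(current_example: dict) -> dict:
    for field_name in list(current_example):
        data_extension = field_name.split(".")[-1].lower()  # normalize the extension only
        if data_extension in DECODERS:
            # Look up the sample with the verbatim key; lowering the whole field
            # name breaks keys like "processed_log_IMU_magnetometer_value.npy".
            current_example[field_name] = DECODERS[data_extension](current_example[field_name])
    return current_example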
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7731
7,731
Add the possibility of a backend for audio decoding
{ "avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4", "events_url": "https://api.github.com/users/intexcor/events{/privacy}", "followers_url": "https://api.github.com/users/intexcor/followers", "following_url": "https://api.github.com/users/intexcor/following{/other_user}", "gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/intexcor", "id": 142020129, "login": "intexcor", "node_id": "U_kgDOCHcOIQ", "organizations_url": "https://api.github.com/users/intexcor/orgs", "received_events_url": "https://api.github.com/users/intexcor/received_events", "repos_url": "https://api.github.com/users/intexcor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/intexcor/subscriptions", "type": "User", "url": "https://api.github.com/users/intexcor", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[ "is there a work around im stuck", "never mind just downgraded" ]
2025-08-08T11:08:56Z
2025-08-20T16:29:33Z
null
NONE
null
null
### Feature request Add the possibility of a backend for audio decoding. Before version 4.0.0, soundfile was used; now torchcodec is used, but the problem is that torchcodec requires ffmpeg, which is problematic to install on platforms such as Colab. Therefore, I suggest adding a decoder selection option when loading the dataset. ### Motivation I use a service for training models in which ffmpeg cannot be installed. ### Your contribution I use a service for training models in which ffmpeg cannot be installed.
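A hypothetical sketch of what the requested API could look like; the `decoder` keyword does not exist in datasets today and is shown only to illustrate the proposal:

```python
# Hypothetical API sketch (not current datasets behavior): let the user select
# an ffmpeg-free decoding backend such as soundfile.
from datasets import Audio, load_dataset

ds = load_dataset("some/audio-dataset", split="train")  # placeholder dataset id
ds = ds.cast_column("audio", Audio(decoder="soundfile"))  # "decoder" is a proposed kwarg
```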
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7730
7,730
Grammar fix: correct "showed" to "shown" in fingerprint.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4", "events_url": "https://api.github.com/users/brchristian/events{/privacy}", "followers_url": "https://api.github.com/users/brchristian/followers", "following_url": "https://api.github.com/users/brchristian/following{/other_user}", "gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brchristian", "id": 2460418, "login": "brchristian", "node_id": "MDQ6VXNlcjI0NjA0MTg=", "organizations_url": "https://api.github.com/users/brchristian/orgs", "received_events_url": "https://api.github.com/users/brchristian/received_events", "repos_url": "https://api.github.com/users/brchristian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brchristian/subscriptions", "type": "User", "url": "https://api.github.com/users/brchristian", "user_view_type": "public" }
[]
closed
false
[]
2025-08-07T21:22:56Z
2025-08-13T18:34:30Z
2025-08-13T13:12:56Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7730.diff", "html_url": "https://github.com/huggingface/datasets/pull/7730", "merged_at": "2025-08-13T13:12:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/7730.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7730" }
This PR corrects a small grammatical issue in the outputs of fingerprint.py: ```diff - "This warning is only showed once. Subsequent hashing failures won't be showed." + "This warning is only shown once. Subsequent hashing failures won't be shown." ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7730/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7729
7,729
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4", "events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}", "followers_url": "https://api.github.com/users/SaleemMalikAI/followers", "following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}", "gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SaleemMalikAI", "id": 115183904, "login": "SaleemMalikAI", "node_id": "U_kgDOBt2RIA", "organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs", "received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events", "repos_url": "https://api.github.com/users/SaleemMalikAI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions", "type": "User", "url": "https://api.github.com/users/SaleemMalikAI", "user_view_type": "public" }
[]
open
false
[ "Is this related to the \"datasets\" library? @SaleemMalikAI " ]
2025-08-07T14:07:23Z
2025-09-24T02:17:15Z
null
NONE
null
null
> Hi, is there any solution for that error? I tried installing this one: pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html This works fine, but tell me how to install a PyTorch version that fits a GPU.
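For reference, a hedged example of installing CUDA-enabled builds of the same PyTorch version from the same index; the `cu113` tag assumes a CUDA 11.3-compatible driver and should be matched to the actual machine:

```
# CUDA 11.3 wheels for torch 1.12.1 (pick the cuXXX tag that matches your setup)
pip install torch==1.12.1+cu113 torchaudio==0.12.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html
```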
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7728
7,728
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
{ "avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4", "events_url": "https://api.github.com/users/efsotr/events{/privacy}", "followers_url": "https://api.github.com/users/efsotr/followers", "following_url": "https://api.github.com/users/efsotr/following{/other_user}", "gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/efsotr", "id": 104755879, "login": "efsotr", "node_id": "U_kgDOBj5ypw", "organizations_url": "https://api.github.com/users/efsotr/orgs", "received_events_url": "https://api.github.com/users/efsotr/received_events", "repos_url": "https://api.github.com/users/efsotr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/efsotr/subscriptions", "type": "User", "url": "https://api.github.com/users/efsotr", "user_view_type": "public" }
[]
open
false
[ "To load just one shard without errors, you should use data_files directly with split set to \"train\", but don’t specify \"allenai/c4\", since that points to the full dataset with all shards.\n\nInstead, do this:\n```\nfrom datasets import load_dataset\nfrom datasets import load_dataset\n\n# Load only one shard of C4\ntraindata = load_dataset(\n \"json\", # <-- use \"json\" since you’re directly passing JSON files\n data_files={\"train\": \"https://huggingface.co/datasets/allenai/c4/resolve/main/en/c4-train.00000-of-01024.json.gz\"},\n split=\"train\"\n)\n\nprint(traindata)\n```\nIf you want both train and validation but only a subset of shards, do:\n```\ntraindata = load_dataset(\n \"json\",\n data_files={\n \"train\": \"https://huggingface.co/datasets/allenai/c4/resolve/main/en/c4-train.00000-of-01024.json.gz\",\n \"validation\": \"https://huggingface.co/datasets/allenai/c4/resolve/main/en/c4-validation.00000-of-00008.json.gz\"\n }\n)\n\nprint(traindata)\n```", "I just want to load a few files from allenai/c4.\nIf I do not specify allenai/c4, where will the files be loaded from?", "My apologies, I’ve modified my previous answer.\nYou just need to specify the full path, for example:\n\nhttps://huggingface.co/datasets/allenai/c4/resolve/main/en/c4-train.00000-of-01024.json.gz\n\n<img width=\"1843\" height=\"633\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/b2922958-9d87-4b62-a00e-c5ca02e31c27\" />\n\nI hope this updated answer is helpful." ]
2025-08-07T04:04:50Z
2025-10-06T21:08:39Z
null
NONE
null
null
### Describe the bug When loading a dataset, the info specified by `data_files` does not overwrite the original info. ### Steps to reproduce the bug ```python from datasets import load_dataset traindata = load_dataset( "allenai/c4", "en", data_files={"train": "en/c4-train.00000-of-01024.json.gz", "validation": "en/c4-validation.00000-of-00008.json.gz"}, ) ``` ```log NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}] ``` ```python from datasets import load_dataset traindata = load_dataset( "allenai/c4", "en", data_files={"train": "en/c4-train.00000-of-01024.json.gz"}, split="train" ) ``` ```log ExpectedMoreSplitsError: {'validation'} ``` ### Expected behavior No error ### Environment info datasets 4.0.0
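A hedged workaround sketch, assuming the goal is to load a subset of shards without tripping the recorded split-size checks; `verification_mode="no_checks"` skips the verification that raises these errors:

```python
from datasets import load_dataset

# Load a single C4 shard, skipping split-size and split-name verification.
traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    verification_mode="no_checks",
)
print(traindata)
```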
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7727
7,727
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
{ "avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4", "events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}", "followers_url": "https://api.github.com/users/doctorpangloss/followers", "following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}", "gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/doctorpangloss", "id": 2229300, "login": "doctorpangloss", "node_id": "MDQ6VXNlcjIyMjkzMDA=", "organizations_url": "https://api.github.com/users/doctorpangloss/orgs", "received_events_url": "https://api.github.com/users/doctorpangloss/received_events", "repos_url": "https://api.github.com/users/doctorpangloss/repos", "site_admin": false, "starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions", "type": "User", "url": "https://api.github.com/users/doctorpangloss", "user_view_type": "public" }
[]
open
false
[]
2025-08-06T08:21:37Z
2025-08-06T08:21:37Z
null
NONE
null
null
### Describe the bug ``` - config_name: some_config data_files: - split: train path: - images/xyz/*.jpg ``` will download correctly, but ``` - config_name: some_config data_files: - split: train path: - ./images/xyz/*.jpg ``` will error with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same directory locally works fine. ### Steps to reproduce the bug 1. Create a README.md with front matter of the form ``` - config_name: some_config data_files: - split: train path: - ./images/xyz/*.jpg ``` 2. `touch ./images/xyz/1.jpg` 3. Observe that this directory loads correctly with `load_dataset("filesystem_path", "some_config")`. 4. Observe exceptions when you load it with `load_dataset("repoid/filesystem_path", "some_config")` ### Expected behavior The `./` prefix should be interpreted correctly ### Environment info Both datasets 4.0.0 and datasets 3.4.0 reproduce the issue
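A minimal sketch of the normalization described above, assuming the fix is to strip a leading `./` from `data_files` patterns before joining them onto the `hf://` repo URL (illustrative helper, not the library's actual code):

```python
import posixpath

def join_repo_pattern(repo_url: str, pattern: str) -> str:
    # "./images/xyz/*.jpg" and "images/xyz/*.jpg" should resolve identically.
    pattern = pattern[2:] if pattern.startswith("./") else pattern
    return posixpath.join(repo_url, pattern)

assert (
    join_repo_pattern("hf://datasets/repoid/filesystem_path", "./images/xyz/*.jpg")
    == "hf://datasets/repoid/filesystem_path/images/xyz/*.jpg"
)
```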
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7726
7,726
fix(webdataset): don't .lower() field_name
{ "avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4", "events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}", "followers_url": "https://api.github.com/users/YassineYousfi/followers", "following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}", "gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YassineYousfi", "id": 29985433, "login": "YassineYousfi", "node_id": "MDQ6VXNlcjI5OTg1NDMz", "organizations_url": "https://api.github.com/users/YassineYousfi/orgs", "received_events_url": "https://api.github.com/users/YassineYousfi/received_events", "repos_url": "https://api.github.com/users/YassineYousfi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions", "type": "User", "url": "https://api.github.com/users/YassineYousfi", "user_view_type": "public" }
[]
closed
false
[ "fixes: https://github.com/huggingface/datasets/issues/7732", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7726). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "CI failures are unrelated, merging :)" ]
2025-08-05T16:57:09Z
2025-08-20T16:35:55Z
2025-08-20T16:35:55Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7726.diff", "html_url": "https://github.com/huggingface/datasets/pull/7726", "merged_at": "2025-08-20T16:35:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/7726.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7726" }
This fixes cases where keys have upper case identifiers
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7726/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7724
7,724
Cannot step into load_dataset.py?
{ "avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4", "events_url": "https://api.github.com/users/micklexqg/events{/privacy}", "followers_url": "https://api.github.com/users/micklexqg/followers", "following_url": "https://api.github.com/users/micklexqg/following{/other_user}", "gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/micklexqg", "id": 13776012, "login": "micklexqg", "node_id": "MDQ6VXNlcjEzNzc2MDEy", "organizations_url": "https://api.github.com/users/micklexqg/orgs", "received_events_url": "https://api.github.com/users/micklexqg/received_events", "repos_url": "https://api.github.com/users/micklexqg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions", "type": "User", "url": "https://api.github.com/users/micklexqg", "user_view_type": "public" }
[]
open
false
[]
2025-08-05T09:28:51Z
2025-08-05T09:28:51Z
null
NONE
null
null
I set a breakpoint in "load_dataset.py" and tried to debug my data loading code, but execution does not stop at any breakpoint. Can "load_dataset.py" not be stepped into?
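A hedged debugging sketch: the loading entry point lives in `datasets/load.py` (there is no `load_dataset.py` module), so a breakpoint set in another file will never be hit; `pdb` can step into the module Python actually imports (the dataset id is a placeholder):

```python
import pdb

import datasets

print(datasets.load.__file__)  # path of the load.py that is actually imported
pdb.run('datasets.load_dataset("user/some_dataset")')  # 's' steps into datasets/load.py
```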
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7723
7,723
Don't remove `trust_remote_code` arg!!!
{ "avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4", "events_url": "https://api.github.com/users/autosquid/events{/privacy}", "followers_url": "https://api.github.com/users/autosquid/followers", "following_url": "https://api.github.com/users/autosquid/following{/other_user}", "gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/autosquid", "id": 758925, "login": "autosquid", "node_id": "MDQ6VXNlcjc1ODkyNQ==", "organizations_url": "https://api.github.com/users/autosquid/orgs", "received_events_url": "https://api.github.com/users/autosquid/received_events", "repos_url": "https://api.github.com/users/autosquid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/autosquid/subscriptions", "type": "User", "url": "https://api.github.com/users/autosquid", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[]
2025-08-04T15:42:07Z
2025-08-04T15:42:07Z
null
NONE
null
null
### Feature request Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios! Add the `trust_remote_code` arg back, please! ### Motivation Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios! ### Your contribution Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
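For illustration only, a hypothetical sketch of the requested opt-in; `trust_remote_code` was removed in datasets 4.0.0, so this call does not work on current releases:

```python
from datasets import load_dataset

# Proposed behavior: default False, explicit opt-in for script-based datasets.
ds = load_dataset("some/script-based-dataset", trust_remote_code=True)  # placeholder id
```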
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7722
7,722
Out of memory even though using load_dataset(..., streaming=True)
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
[]
2025-08-04T14:41:55Z
2025-08-04T14:41:55Z
null
NONE
null
null
### Describe the bug I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time, and I finally run into an OOM. ### Steps to reproduce the bug ``` ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True) for i,sample in enumerate(tqdm(ds)): target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav') try: sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate']) except Exception as e: print(f"Could not write audio {i} in ds: {e}") ``` ### Expected behavior I'd expect a small memory footprint, with memory being freed after each iteration of the for loop. Instead, memory usage keeps increasing. I tried removing the logic that writes the sound file and just printed the sample, but the issue remains the same. ### Environment info Python 3.12.11 Ubuntu 24 datasets 4.0.0 and 3.6.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7721
7,721
Bad split error message when using percentages
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
[ "I'd like to work on this: add clearer validation/messages for percent-based splits + tests", "The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n raise ValueError(f'Unknown split \"{split}\". Should be one of {list(name2len)}.')\nValueError: Unknown split \"train\". Should be one of ['test.clean', 'test.other', 'train.clean.100', 'train.clean.360', 'train.other.500', 'validation.clean', 'validation.other'].\n```\n" ]
2025-08-04T13:20:25Z
2025-08-14T14:42:24Z
null
NONE
null
null
### Describe the bug Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps. When doing so, the library returns this error: raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}") ValueError: Bad split: train[0%:10%]. Available splits: ['train'] Edit: The same happens with a split like _train[:90000]_ ### Steps to reproduce the bug ``` for split in range(10): split_str = f"train[{split*10}%:{(split+1)*10}%]" print(f"Processing split {split_str}...") ds = load_dataset("user/dataset", split=split_str, streaming=True) ``` ### Expected behavior I'd expect the library to split my dataset into 10% steps. ### Environment info Python 3.12.11 Ubuntu 24 datasets 4.0.0
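A hedged workaround sketch: percent slices are resolved from recorded split sizes, which streaming datasets do not go through, so `IterableDataset.skip()`/`.take()` can approximate 10% chunks when the example count is known or estimated (the repo id and count are placeholders):

```python
from datasets import load_dataset

ds = load_dataset("user/dataset", split="train", streaming=True)
num_examples = 90000  # assumption: known or estimated split size
step = num_examples // 10

for i in range(10):
    chunk = ds.skip(i * step).take(step)  # roughly train[i*10%:(i+1)*10%]
    print(f"Processing chunk {i}...")
```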
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7720
7,720
Datasets 4.0 map function causing column not found
{ "avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4", "events_url": "https://api.github.com/users/Darejkal/events{/privacy}", "followers_url": "https://api.github.com/users/Darejkal/followers", "following_url": "https://api.github.com/users/Darejkal/following{/other_user}", "gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Darejkal", "id": 55143337, "login": "Darejkal", "node_id": "MDQ6VXNlcjU1MTQzMzM3", "organizations_url": "https://api.github.com/users/Darejkal/orgs", "received_events_url": "https://api.github.com/users/Darejkal/received_events", "repos_url": "https://api.github.com/users/Darejkal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions", "type": "User", "url": "https://api.github.com/users/Darejkal", "user_view_type": "public" }
[]
open
false
[ "Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The maintainers might want to consider closing this issue.", "Hi, have you tried on a large dataset (200GB+) perhaps? I will try my best to do a rerun with main branch when I have the time.", "I ran it on a small dataset, maybe that’s why I didn’t hit the issue. If it still shows up on your side with the latest main, let me know. I can try it on a bigger set too." ]
2025-08-03T12:52:34Z
2025-08-07T19:23:34Z
null
NONE
null
null
### Describe the bug The column returned after mapping is not found in the new instance of the dataset. ### Steps to reproduce the bug Code for reproduction: after running get_total_audio_length, it errors out because `data` has no `duration` column. ``` def compute_duration(x): return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]} def get_total_audio_length(dataset): data = dataset.map(compute_duration, num_proc=NUM_PROC) print(data) durations = data["duration"] total_seconds = sum(durations) return total_seconds ``` ### Expected behavior The new datasets.Dataset instance should have the new column attached. ### Environment info - `datasets` version: 4.0.0 - Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.33.2 - PyArrow version: 20.0.0 - Pandas version: 2.3.0 - `fsspec` version: 2023.12.2
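A self-contained variant of the snippet above using a dummy in-memory dataset (no audio files or multiprocessing needed), which is the minimal check for whether the mapped column is attached:

```python
from datasets import Dataset

# One fake "audio" row: 16000 samples at 16 kHz, i.e. exactly 1 second.
ds = Dataset.from_dict({"audio": [{"array": [0.0] * 16000, "sampling_rate": 16000}]})

data = ds.map(lambda x: {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]})
print(data.column_names)      # expected to include "duration"
print(sum(data["duration"]))  # expected: 1.0
```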
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7719
7,719
Specify dataset columns types in typehint
{ "avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4", "events_url": "https://api.github.com/users/Samoed/events{/privacy}", "followers_url": "https://api.github.com/users/Samoed/followers", "following_url": "https://api.github.com/users/Samoed/following{/other_user}", "gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Samoed", "id": 36135455, "login": "Samoed", "node_id": "MDQ6VXNlcjM2MTM1NDU1", "organizations_url": "https://api.github.com/users/Samoed/orgs", "received_events_url": "https://api.github.com/users/Samoed/received_events", "repos_url": "https://api.github.com/users/Samoed/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Samoed/subscriptions", "type": "User", "url": "https://api.github.com/users/Samoed", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[]
2025-08-02T13:22:31Z
2025-08-02T13:22:31Z
null
NONE
null
null
### Feature request Make `Dataset` optionally generic so it can be used with type annotations, like it was done in `torch.utils.data.DataLoader` https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131 ### Motivation In MTEB we're using a lot of dataset objects, but they're a bit poor in type hints. E.g. we can specify this for a DataLoader ```python from typing import TypedDict from torch.utils.data import DataLoader class CorpusInput(TypedDict): title: list[str] body: list[str] class QueryInput(TypedDict): query: list[str] instruction: list[str] def queries_loader() -> DataLoader[QueryInput]: ... def corpus_loader() -> DataLoader[CorpusInput]: ... ``` But for datasets we can only specify columns in comments ```python from datasets import Dataset QueryDataset = Dataset """Query dataset should have `query` and `instructions` columns as `str` """ ``` ### Your contribution I can create a draft implementation
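One possible direction for such a draft, sketched under the assumption that the typing should be static-analysis only (no runtime change); `TypedDataset` is a hypothetical name:

```python
from typing import TYPE_CHECKING, Generic, TypedDict, TypeVar

from datasets import Dataset

T = TypeVar("T")

if TYPE_CHECKING:
    class TypedDataset(Dataset, Generic[T]):
        """Static-analysis-only alias that carries the row schema."""
else:
    TypedDataset = Dataset  # at runtime it is just a plain Dataset

class QueryRow(TypedDict):
    query: str
    instruction: str

def load_queries() -> "TypedDataset[QueryRow]":
    ...
```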
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7718
7,718
add support for pyarrow string view in features
{ "avatar_url": "https://avatars.githubusercontent.com/u/5051569?v=4", "events_url": "https://api.github.com/users/onursatici/events{/privacy}", "followers_url": "https://api.github.com/users/onursatici/followers", "following_url": "https://api.github.com/users/onursatici/following{/other_user}", "gists_url": "https://api.github.com/users/onursatici/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/onursatici", "id": 5051569, "login": "onursatici", "node_id": "MDQ6VXNlcjUwNTE1Njk=", "organizations_url": "https://api.github.com/users/onursatici/orgs", "received_events_url": "https://api.github.com/users/onursatici/received_events", "repos_url": "https://api.github.com/users/onursatici/repos", "site_admin": false, "starred_url": "https://api.github.com/users/onursatici/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/onursatici/subscriptions", "type": "User", "url": "https://api.github.com/users/onursatici", "user_view_type": "public" }
[]
closed
false
[ "@lhoestq who do you think would be the best to have a look at this? Any pointers would be appreciated, thanks!", "Hi ! what's the rationale for supporting string view ? I'm afraid it can complexify the typing logic without much value", "Hi @lhoestq ! I mainly want to be able to create features by using `Features.from_arrow_schema(dataset_schema)` on an arrow dataset with string view columns, currently there is no easy way to do this, and string_view is becoming an increasingly common data type for string columns in arrow. Thanks for having a look!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7718). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-08-01T14:58:39Z
2025-09-12T13:14:16Z
2025-09-12T13:13:24Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7718.diff", "html_url": "https://github.com/huggingface/datasets/pull/7718", "merged_at": "2025-09-12T13:13:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/7718.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7718" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7718/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7717
7,717
Cached dataset is not used when explicitly passing the cache_dir parameter
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
[ "Hi, I've investigated this issue and can confirm the bug. Here are my findings:\n\n**1. Reproduction:**\nI was able to reproduce the issue on the latest `main` branch. Using the provided code snippet, `snapshot_download` correctly populates the custom `cache_dir`, but `load_dataset` with the same `cache_dir` triggers a full re-download and re-processing of the dataset, ignoring the existing cache.\n\n**2. Investigation:**\nI traced the `cache_dir` parameter from `load_dataset` down to the `DatasetBuilder` class in `src/datasets/builder.py`. The root cause seems to be a mismatch between the cache path structure created by `snapshot_download` and the path structure expected by the `DatasetBuilder`.\n\nSpecifically, the `_relative_data_dir` method in `DatasetBuilder` constructs a path using `namespace___dataset_name` (with three underscores), while the cache from `snapshot_download` appears to use a `repo_id` based format like `datasets--namespace--dataset_name` (with double hyphens).\n\n**3. Attempted Fix & Result:**\nI attempted a fix by modifying the `_relative_data_dir` method to replace the path separator \"/\" in `self.repo_id` with \"--\", to align it with the `snapshot_download` structure.\n\nThis partially worked: `load_dataset` no longer re-downloads the files. However, it still re-processes them every time (triggering \"Generating train split...\", etc.) instead of loading the already processed Arrow files from the cache.\n\nThis suggests the issue is deeper than just the directory name and might be related to how the builder verifies the integrity or presence of the processed cache files.\n\nI hope these findings are helpful for whoever picks up this issue." ]
2025-08-01T07:12:41Z
2025-08-05T19:19:36Z
null
NONE
null
null
### Describe the bug Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset(), the cached snapshot is not used. In both calls, I provide the cache_dir parameter. ### Steps to reproduce the bug ``` from datasets import load_dataset, concatenate_datasets from huggingface_hub import snapshot_download def download_ds(name: str): snapshot_download(repo_id=name, repo_type="dataset", cache_dir="G:/Datasets/cache") def prepare_ds(): audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4, cache_dir="G:/Datasets/cache") print(audio_ds.features) if __name__ == '__main__': download_ds("openslr/librispeech_asr") prepare_ds() ``` ### Expected behavior I'd expect the cached version of the dataset to be used. Instead, the same dataset is downloaded again to the default cache directory. ### Environment info Windows 11 datasets==4.0.0 Python 3.12.11
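A hedged workaround sketch: `snapshot_download` returns the local snapshot path, and pointing `load_dataset` at that path makes the reuse explicit instead of relying on the two cache layouts matching:

```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="openslr/librispeech_asr",
    repo_type="dataset",
    cache_dir="G:/Datasets/cache",
)
# Load from the already-downloaded snapshot; processed Arrow files still go to cache_dir.
ds = load_dataset(local_path, num_proc=4, cache_dir="G:/Datasets/cache")
print(ds)
```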
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7717/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7716
7,716
typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7716). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-31T17:14:45Z
2025-07-31T17:17:15Z
2025-07-31T17:14:51Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7716.diff", "html_url": "https://github.com/huggingface/datasets/pull/7716", "merged_at": "2025-07-31T17:14:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/7716.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7716" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7716/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7715
7,715
Docs: Use Image(mode="F") for PNG/JPEG depth maps
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7715). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-31T17:09:49Z
2025-07-31T17:12:23Z
2025-07-31T17:10:10Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7715.diff", "html_url": "https://github.com/huggingface/datasets/pull/7715", "merged_at": "2025-07-31T17:10:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/7715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7715" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7715/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7714
7,714
fix num_proc=1 ci test
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7714). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-31T16:36:32Z
2025-07-31T16:39:03Z
2025-07-31T16:38:03Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7714.diff", "html_url": "https://github.com/huggingface/datasets/pull/7714", "merged_at": "2025-07-31T16:38:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/7714.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7714" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7714/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7713
7,713
Update cli.mdx to refer to the new "hf" CLI
{ "avatar_url": "https://avatars.githubusercontent.com/u/1936278?v=4", "events_url": "https://api.github.com/users/evalstate/events{/privacy}", "followers_url": "https://api.github.com/users/evalstate/followers", "following_url": "https://api.github.com/users/evalstate/following{/other_user}", "gists_url": "https://api.github.com/users/evalstate/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/evalstate", "id": 1936278, "login": "evalstate", "node_id": "MDQ6VXNlcjE5MzYyNzg=", "organizations_url": "https://api.github.com/users/evalstate/orgs", "received_events_url": "https://api.github.com/users/evalstate/received_events", "repos_url": "https://api.github.com/users/evalstate/repos", "site_admin": false, "starred_url": "https://api.github.com/users/evalstate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/evalstate/subscriptions", "type": "User", "url": "https://api.github.com/users/evalstate", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7713). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-31T15:06:11Z
2025-07-31T16:37:56Z
2025-07-31T16:37:55Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7713.diff", "html_url": "https://github.com/huggingface/datasets/pull/7713", "merged_at": "2025-07-31T16:37:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/7713.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7713" }
Update to refer to `hf auth login`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7713/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7712
7,712
Retry intermediate commits too
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7712). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-31T14:33:33Z
2025-07-31T14:37:43Z
2025-07-31T14:36:43Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7712.diff", "html_url": "https://github.com/huggingface/datasets/pull/7712", "merged_at": "2025-07-31T14:36:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/7712.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7712" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7712/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7711
7,711
Update dataset_dict push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7711). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-31T13:25:03Z
2025-07-31T14:18:55Z
2025-07-31T14:18:53Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7711.diff", "html_url": "https://github.com/huggingface/datasets/pull/7711", "merged_at": "2025-07-31T14:18:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/7711.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7711" }
following https://github.com/huggingface/datasets/pull/7708
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7711/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7710
7,710
Concurrent IterableDataset push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7710). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-31T10:11:31Z
2025-07-31T10:14:00Z
2025-07-31T10:12:52Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7710.diff", "html_url": "https://github.com/huggingface/datasets/pull/7710", "merged_at": "2025-07-31T10:12:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/7710.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7710" }
Same as https://github.com/huggingface/datasets/pull/7708 but for `IterableDataset`
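For context, `IterableDataset.push_to_hub` uploads shards while iterating over the stream; this PR applies the same retry-on-parent-commit logic so several such jobs can safely target one repo. A hypothetical usage sketch (the dataset and repo ids are placeholders, not from the PR):

```python
from datasets import load_dataset

# Stream a source dataset without materializing it on disk.
stream = load_dataset("user/source-dataset", split="train", streaming=True)

# Lazily lowercase a text column; nothing runs until iteration/upload.
stream = stream.map(lambda ex: {"text": ex["text"].lower()})

# Upload shard by shard; with this PR, two concurrent jobs pushing to the
# same repo retry their README.md commit instead of overwriting each other.
stream.push_to_hub("user/processed-dataset")
```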
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7710/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7709
7,709
Release 4.0.0 breaks usage patterns of with_format
{ "avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4", "events_url": "https://api.github.com/users/wittenator/events{/privacy}", "followers_url": "https://api.github.com/users/wittenator/followers", "following_url": "https://api.github.com/users/wittenator/following{/other_user}", "gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wittenator", "id": 9154515, "login": "wittenator", "node_id": "MDQ6VXNlcjkxNTQ1MTU=", "organizations_url": "https://api.github.com/users/wittenator/orgs", "received_events_url": "https://api.github.com/users/wittenator/received_events", "repos_url": "https://api.github.com/users/wittenator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wittenator/subscriptions", "type": "User", "url": "https://api.github.com/users/wittenator", "user_view_type": "public" }
[]
closed
false
[ "This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"numpy\")\nprint(dataset[\"star\"][:].ndim)\n```", "Ah perfect, thanks for clearing this up. I would close this ticket then." ]
2025-07-30T11:34:53Z
2025-08-07T08:27:18Z
2025-08-07T08:27:18Z
NONE
null
null
### Describe the bug

Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new Column() class. As far as I see, this makes working on a whole column (in-memory) more complex, i.e. normalizing an in-memory dataset for which iterating would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new Column class yet.

### Steps to reproduce the bug

Steps to reproduce:

```
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```

### Expected behavior

Working on whole columns should be possible.

### Environment info

- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4", "events_url": "https://api.github.com/users/wittenator/events{/privacy}", "followers_url": "https://api.github.com/users/wittenator/followers", "following_url": "https://api.github.com/users/wittenator/following{/other_user}", "gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wittenator", "id": 9154515, "login": "wittenator", "node_id": "MDQ6VXNlcjkxNTQ1MTU=", "organizations_url": "https://api.github.com/users/wittenator/orgs", "received_events_url": "https://api.github.com/users/wittenator/received_events", "repos_url": "https://api.github.com/users/wittenator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wittenator/subscriptions", "type": "User", "url": "https://api.github.com/users/wittenator", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7708
7,708
Concurrent push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7708). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-29T13:14:30Z
2025-07-31T10:00:50Z
2025-07-31T10:00:49Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7708.diff", "html_url": "https://github.com/huggingface/datasets/pull/7708", "merged_at": "2025-07-31T10:00:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/7708.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7708" }
Retry the step that (downloads + updates + uploads) the README.md using `create_commit(..., parent_commit=...)` if there was a commit in the meantime. This should enable concurrent `push_to_hub()` since it won't overwrite the README.md metadata anymore.

Note: we fixed an issue server side to make this work:

<details>

DO NOT MERGE FOR NOW since it seems there is one bug that prevents this logic from working: I'm using parent_commit to enable concurrent push_to_hub() in datasets for a retry mechanism, but for some reason I always run into a weird situation. Sometimes create_commit(.., parent_commit=...) returns error 500 but the commit did happen on the Hub side without respecting parent_commit, e.g. request id

```
huggingface_hub.errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/lhoestq/tmp/commit/main (Request ID: Root=1-6888d8af-2ce517bc60c69cb378b51526;d1b17993-c5d0-4ccd-9926-060c45f9ed61)
```

fix coming in [internal](https://github.com/huggingface-internal/moon-landing/pull/14617)

</details>

close https://github.com/huggingface/datasets/issues/7600
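To make the mechanism concrete, here is a minimal sketch of the retry idea described above. It is not the PR's actual code: the repo id, retry budget, and README edit are placeholders, and it assumes `huggingface_hub`'s `create_commit`/`parent_commit` API.

```python
from huggingface_hub import CommitOperationAdd, HfApi, hf_hub_download
from huggingface_hub.utils import HfHubHTTPError

api = HfApi()
repo_id = "user/my-dataset"  # placeholder repo id

# Retry the (download + update + upload) of README.md until our parent
# commit is still the repo head at commit time.
for _ in range(5):  # illustrative retry budget
    head = api.repo_info(repo_id, repo_type="dataset").sha
    path = hf_hub_download(repo_id, "README.md", repo_type="dataset", revision=head)
    with open(path, encoding="utf-8") as f:
        updated = f.read() + "\n<!-- updated metadata goes here -->\n"  # stand-in edit
    try:
        api.create_commit(
            repo_id,
            operations=[CommitOperationAdd("README.md", updated.encode())],
            commit_message="Update dataset card",
            repo_type="dataset",
            parent_commit=head,  # rejected if someone else committed in the meantime
        )
        break
    except HfHubHTTPError:
        continue  # a concurrent push_to_hub() won the race; re-read and retry
```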
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7708/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7707
7,707
load_dataset() in 4.0.0 failed when decoding audio
{ "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiqing-feng", "id": 107918818, "login": "jiqing-feng", "node_id": "U_kgDOBm614g", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "type": "User", "url": "https://api.github.com/users/jiqing-feng", "user_view_type": "public" }
[]
closed
false
[ "Hi @lhoestq . Would you please have a look at it? I use the official NV Docker ([NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`) on A100 and encountered this issue, but I don't know how to fix it.", "Use !pip install -U datasets[audio] rather than !pip install datasets\n\nI got the solution from this link [https://github.com/huggingface/datasets/issues/7678](https://github.com/huggingface/datasets/issues/7678), and it processes the data; however, it led to certain transformer importnerrors", "> https://github.com/huggingface/datasets/issues/7678\n\nHi @asantewaa-bremang . Thanks for your reply, but sadly it does not work for me.", "It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n\notherwise feel free to open a new issue there", "@jiqing-feng, are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio]. ", "> [@jiqing-feng](https://github.com/jiqing-feng), are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio].\n\nNo, I ran the script on the A100 instance locally.", "> It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n> \n> otherwise feel free to open a new issue there\n\nThanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?", "> Thanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?\n\nFor now I'd recommend using `datasets==3.6.0` if this issue is blocking for you", "Resolved by installing the pre-release torchcodec. Thanks!", "Same. torchcodec==0.6.0 failed, torchcodec==0.5.0 solved", "So what combination of 'datasets' and 'torchcodec' worked out?", "> So what combination of 'datasets' and 'torchcodec' worked out?\n\nnice mate! \njust about to write this massage!!!!!\n\n\n\nwhen this will solve????\n", "torchcodec 0.7 fails\n0.5 not guaranty to work with torch 2.8\n\n", "> Resolved by installing the pre-release torchcodec. Thanks!\n\nhow to install the pre-release torchcodec, when I use pip install --pre torchcodec, it do not download new version", "i fixed this issue by install :\n\nconda install \"ffmpeg<8\"\nor\nconda install \"ffmpeg<8\" -c conda-forge\n\nyou can find more info : https://github.com/meta-pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec", "It loads fine with datasets==3.6.0" ]
2025-07-29T03:25:03Z
2025-10-05T06:41:38Z
2025-08-01T05:15:45Z
NONE
null
null
### Describe the bug

Cannot decode audio data.

### Steps to reproduce the bug

```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```

1st round run, got

```
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 172, in decode_example
    raise ImportError("To support decoding audio data, please install 'torchcodec'.")
ImportError: To support decoding audio data, please install 'torchcodec'.
```

After `pip install torchcodec` and run, got

```
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/_metadata.py", line 16, in <module>
    from torchcodec._core.ops import (
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 84, in <module>
    load_torchcodec_shared_libraries()
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 69, in load_torchcodec_shared_libraries
    raise RuntimeError(
RuntimeError: Could not load libtorchcodec. Likely causes:
  1. FFmpeg is not properly installed in your environment. We support versions 4, 5, 6 and 7.
  2. The PyTorch version (2.8.0a0+5228986c39.nv25.06) is not compatible with this version of TorchCodec. Refer to the version compatibility table: https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
  3. Another runtime dependency; see exceptions below.
The following exceptions were raised as we tried to load libtorchcodec:

[start of libtorchcodec loading traceback]
FFmpeg version 7: libavutil.so.59: cannot open shared object file: No such file or directory
FFmpeg version 6: libavutil.so.58: cannot open shared object file: No such file or directory
FFmpeg version 5: libavutil.so.57: cannot open shared object file: No such file or directory
FFmpeg version 4: libavutil.so.56: cannot open shared object file: No such file or directory
[end of libtorchcodec loading traceback].
```

After `apt update && apt install ffmpeg -y`, got

```
Traceback (most recent call last):
  File "/workspace/jiqing/test_datasets.py", line 4, in <module>
    print(dataset[0]["audio"]["array"])
  File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
    return self._getitem(key)
  File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2841, in _getitem
    formatted_output = format_table(
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 657, in format_table
    return formatter(pa_table, query_type=query_type)
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 410, in __call__
    return self.format_row(pa_table)
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 459, in format_row
    row = self.python_features_decoder.decode_row(row)
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 223, in decode_row
    return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 2093, in decode_example
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 1405, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 198, in decode_example
    audio = AudioDecoder(bytes, stream_index=self.stream_index, sample_rate=self.sampling_rate)
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_audio_decoder.py", line 62, in __init__
    self._decoder = create_decoder(source=source, seek_mode="approximate")
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_decoder_utils.py", line 33, in create_decoder
    return core.create_from_bytes(source, seek_mode)
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 144, in create_from_bytes
    return create_from_tensor(buffer, seek_mode)
  File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
    return self._op(*args, **kwargs)
NotImplementedError: Could not run 'torchcodec_ns::create_from_tensor' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions.

'torchcodec_ns::create_from_tensor' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /__w/torchcodec/torchcodec/pytorch/torchcodec/src/torchcodec/_core/custom_ops.cpp:694 [kernel]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /opt/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMPS: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradXPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:108 [backend fallback]
AutogradLazy: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradMTIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMAIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMeta: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:99 [backend fallback]
Tracer: registered at /opt/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:542 [backend fallback]
AutocastMPS: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```

### Expected behavior

The result is

```
[0.00238037 0.0020752 0.00198364 ... 0.00042725 0.00057983 0.0010376 ]
```

on `datasets==3.6.0`

### Environment info

[NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`

```
- `datasets` version: 4.0.0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
```
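While the torchcodec/FFmpeg mismatch is being sorted out (the comment thread above suggests pinning torchcodec or `datasets==3.6.0`), one user-side workaround is to skip `datasets`' decoding entirely and decode the raw bytes yourself, roughly what older versions did internally via soundfile. This is a hedged sketch, not library code: it assumes `soundfile` is installed and the audio is in a format it can read (e.g. FLAC or WAV).

```python
import io

import soundfile as sf  # assumed available: pip install soundfile
from datasets import Audio, load_dataset

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

# Turn off automatic decoding so torchcodec is never imported.
ds = ds.cast_column("audio", Audio(decode=False))

raw = ds[0]["audio"]  # {"bytes": b"...", "path": "..."}
array, sampling_rate = sf.read(io.BytesIO(raw["bytes"]))
print(array[:5], sampling_rate)
```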
{ "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiqing-feng", "id": 107918818, "login": "jiqing-feng", "node_id": "U_kgDOBm614g", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "type": "User", "url": "https://api.github.com/users/jiqing-feng", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7707/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7706
7,706
Reimplemented partial split download support (revival of #6832)
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
[ " Mario’s Patch (in PR #6832):\r\n```\r\ndef _make_split_generators_kwargs(self, prepare_split_kwargs):\r\n # Pass `pipeline` into `_split_generators()` from `prepare_split_kwargs` if\r\n # it's in the call signature of `_split_generators()`.\r\n # This allows for global preprocessing in beam.\r\n split_generators_kwargs = {}\r\n if \"pipeline\" in inspect.signature(self._split_generators).parameters:\r\n split_generators_kwargs[\"pipeline\"] = prepare_split_kwargs[\"pipeline\"]\r\n split_generators_kwargs.update(super()._make_split_generators_kwargs(prepare_split_kwargs))\r\n return split_generators_kwargs\r\n```\r\n\r\nIn the latest main(in my fork and og repo's main):\r\n```\r\ndef _make_split_generators_kwargs(self, prepare_split_kwargs):\r\n \"\"\"Get kwargs for `self._split_generators()` from `prepare_split_kwargs`.\"\"\"\r\n splits = prepare_split_kwargs.pop(\"splits\", None)\r\n if self._supports_partial_generation():\r\n return {\"splits\": splits}\r\n return {}\r\n```\r\nIt enables passing splits into _split_generators() only for builders that support it(if i am not wrong..). So ignored Beam logic for now!", "Awesome ! btw we can modify the GeneratorBasedBuilder and ArrowBasedBuilder if needed now that custom loading scripts are not supported anymore :)\r\n\r\nI'll review this in a bit", "@lhoestq @ArjunJagdale is this still work in progress or is just a review missing? Anything I can help with here? This would indeed be a cool feature", "I did a preliminary pass and it looks good but we should check the CI, could you run `make style` @ArjunJagdale so we can run the CI ?", "Done! Also some parts may be incomplete because I had to focus on important exams and semester activities so couldn’t finish the work fully. I will still try my best." ]
2025-07-28T19:40:40Z
2025-10-29T10:20:22Z
null
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7706.diff", "html_url": "https://github.com/huggingface/datasets/pull/7706", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7706.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7706" }
(revival of #6832) https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130

Close https://github.com/huggingface/datasets/issues/4101, and more

---

### PR under work!!!!
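The user-facing goal, as a hypothetical illustration of the feature this PR revives (the dataset id is a placeholder, and the PR is still in progress):

```python
from datasets import load_dataset

# With partial split download support, only the files backing the requested
# split would be downloaded and prepared, instead of all splits.
train = load_dataset("user/some-dataset", split="train")
```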
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7706/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7705
7,705
Cannot read installed dataset in dataset.load(.)
{ "avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4", "events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}", "followers_url": "https://api.github.com/users/HuangChiEn/followers", "following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}", "gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HuangChiEn", "id": 52521165, "login": "HuangChiEn", "node_id": "MDQ6VXNlcjUyNTIxMTY1", "organizations_url": "https://api.github.com/users/HuangChiEn/orgs", "received_events_url": "https://api.github.com/users/HuangChiEn/received_events", "repos_url": "https://api.github.com/users/HuangChiEn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions", "type": "User", "url": "https://api.github.com/users/HuangChiEn", "user_view_type": "public" }
[]
open
false
[ "You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n\n```python\ndataset = load_dataset(local_directory_path)\n```", "> You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n> \n> dataset = load_dataset(local_directory_path)\n\nIt's good suggestion, but my server env is network restriction. It can not directly fetch data from huggingface. I spent lot of time to download and transfer it to the server.\nSo, I attempt to make load_dataset connect to my local dataset. ", "Just Solved it few day before. Will post solution later...\nalso thanks folks quick reply.." ]
2025-07-28T09:43:54Z
2025-08-05T01:24:32Z
null
NONE
null
null
Hi folks, I'm a newbie with the Hugging Face datasets API. As the title says, I'm facing an issue where the load_dataset API cannot connect to the already-installed dataset.

Code snippet:

<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />

Data path: "/xxx/joseph/llava_ds/vlm_ds"; it contains all the video clips I want!

<img width="1398" height="261" alt="Image" src="https://github.com/user-attachments/assets/bf213b66-e344-4311-97e7-bc209677ae77" />

I run the py script with:

<img width="1042" height="38" alt="Image" src="https://github.com/user-attachments/assets/8b3fcee4-e1a6-41b8-bee1-91567b00d9d2" />

But something bad happened: even though I provide the dataset path via "HF_HUB_CACHE", it still attempts to download data from the remote side:

<img width="1697" height="813" alt="Image" src="https://github.com/user-attachments/assets/baa6cff1-a724-4710-a8c4-4805459deffb" />

Any suggestions will be appreciated!!
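Based on the suggestion in the comments above, here is a sketch of a fully local flow: fetch the repo once on a machine with network access, copy it over, then point `load_dataset` at the directory. The repo id is a placeholder and the path is taken from the issue.

```python
# Step 1 -- on a machine with network access (placeholder repo id):
from huggingface_hub import snapshot_download

snapshot_download("some-org/vlm_ds", repo_type="dataset", local_dir="./vlm_ds")

# Step 2 -- on the restricted server, after copying the folder over.
# Launch with HF_HUB_OFFLINE=1 set in the shell so nothing touches the network:
#   HF_HUB_OFFLINE=1 python load_local.py
from datasets import load_dataset

dataset = load_dataset("/xxx/joseph/llava_ds/vlm_ds")  # path from the issue
print(dataset)
```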
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7705/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7704
7,704
Fix map() example in datasets documentation: define tokenizer before use
{ "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4", "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}", "followers_url": "https://api.github.com/users/Sanjaykumar030/followers", "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}", "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sanjaykumar030", "id": 183703408, "login": "Sanjaykumar030", "node_id": "U_kgDOCvMXcA", "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs", "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events", "repos_url": "https://api.github.com/users/Sanjaykumar030/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions", "type": "User", "url": "https://api.github.com/users/Sanjaykumar030", "user_view_type": "public" }
[]
closed
false
[ "Hi @lhoestq, just a gentle follow-up on this doc fix PR (#7704). Let me know if any changes are needed β€” happy to update.\r\nHope this improvement helps users run the example without confusion!", "the modified file is the readme of the docs, not about map() specifically" ]
2025-07-26T14:18:17Z
2025-08-13T13:23:18Z
2025-08-13T13:06:37Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7704.diff", "html_url": "https://github.com/huggingface/datasets/pull/7704", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7704.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7704" }
## Problem

The current datasets.Dataset.map() example in the documentation demonstrates batched processing using a tokenizer object without defining or importing it. This causes a NameError when users copy and run the example as-is, breaking the expected seamless experience.

## Correction

This PR fixes the issue by explicitly importing and initializing the tokenizer using the Transformers library (AutoTokenizer.from_pretrained("bert-base-uncased")), making the example self-contained and runnable without errors. This will help new users understand the workflow and apply the method correctly.

Closes #7703
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7704/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7703
7,703
[Docs] map() example uses undefined `tokenizer` β€” causes NameError
{ "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4", "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}", "followers_url": "https://api.github.com/users/Sanjaykumar030/followers", "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}", "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sanjaykumar030", "id": 183703408, "login": "Sanjaykumar030", "node_id": "U_kgDOCvMXcA", "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs", "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events", "repos_url": "https://api.github.com/users/Sanjaykumar030/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions", "type": "User", "url": "https://api.github.com/users/Sanjaykumar030", "user_view_type": "public" }
[]
open
false
[ "I've submitted PR #7704 which adds documentation to clarify the behavior of `map()` when returning `None`." ]
2025-07-26T13:35:11Z
2025-07-27T09:44:35Z
null
CONTRIBUTOR
null
null
## Description

The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time it's copied.

Here is the problematic line:

```python
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```

This assumes the user has already set up a tokenizer, which contradicts the goal of having self-contained, copy-paste-friendly examples.

## Problem

Users who copy and run the example as-is will encounter:

```python
NameError: name 'tokenizer' is not defined
```

This breaks the flow for users and violates HuggingFace's documentation principle that examples should "work as expected" when copied directly.

## Proposal

Update the example to include the required tokenizer setup using the Transformers library, like so:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```

This will help new users understand the workflow and apply the method correctly.

## Note

This PR complements ongoing improvements like #7700, which clarifies multiprocessing in .map(). My change focuses on the undefined `tokenizer`, which causes a NameError.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7703/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7702
7,702
num_proc=0 behaves like None, num_proc=1 uses one worker (not the main process); clarify num_proc documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4", "events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}", "followers_url": "https://api.github.com/users/tanuj-rai/followers", "following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}", "gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tanuj-rai", "id": 84439872, "login": "tanuj-rai", "node_id": "MDQ6VXNlcjg0NDM5ODcy", "organizations_url": "https://api.github.com/users/tanuj-rai/orgs", "received_events_url": "https://api.github.com/users/tanuj-rai/received_events", "repos_url": "https://api.github.com/users/tanuj-rai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions", "type": "User", "url": "https://api.github.com/users/tanuj-rai", "user_view_type": "public" }
[]
closed
false
[ "I think we can support num_proc=0 and make it equivalent to `None` to make it simpler", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7702). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> I think we can support num_proc=0 and make it equivalent to `None` to make it simpler\r\n\r\nThank you @lhoestq for reviewing it. Please let me know if anything needs to be updated further." ]
2025-07-26T08:19:39Z
2025-07-31T14:52:33Z
2025-07-31T14:52:33Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7702.diff", "html_url": "https://github.com/huggingface/datasets/pull/7702", "merged_at": "2025-07-31T14:52:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/7702.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7702" }
Fixes issue #7700.

This PR makes num_proc=0 behave like None in Dataset.map(), disabling multiprocessing. It improves UX by aligning with DataLoader(num_workers=0) behavior. The num_proc docstring is also updated to clearly explain valid values and behavior.

@SunMarc
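A hypothetical before/after illustration of the merged behavior (toy data, not taken from the PR itself):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

# num_proc=0 is now accepted and treated like num_proc=None: the map runs
# in the main process, mirroring DataLoader(num_workers=0).
a = ds.map(lambda ex: {"y": ex["x"] + 1}, num_proc=0)
b = ds.map(lambda ex: {"y": ex["x"] + 1})  # equivalent
assert list(a["y"]) == list(b["y"])
```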
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7702/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7701
7,701
Update fsspec max version to current release 2025.7.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/5445560?v=4", "events_url": "https://api.github.com/users/rootAvish/events{/privacy}", "followers_url": "https://api.github.com/users/rootAvish/followers", "following_url": "https://api.github.com/users/rootAvish/following{/other_user}", "gists_url": "https://api.github.com/users/rootAvish/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rootAvish", "id": 5445560, "login": "rootAvish", "node_id": "MDQ6VXNlcjU0NDU1NjA=", "organizations_url": "https://api.github.com/users/rootAvish/orgs", "received_events_url": "https://api.github.com/users/rootAvish/received_events", "repos_url": "https://api.github.com/users/rootAvish/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rootAvish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rootAvish/subscriptions", "type": "User", "url": "https://api.github.com/users/rootAvish", "user_view_type": "public" }
[]
closed
false
[ "@lhoestq I ran the test suite locally and while some tests were failing those failures are present on the main branch too. Could you please review and trigger the CI?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7701). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Which release will this be available in ? I'm running into this issue with `datasets=3.6.0`" ]
2025-07-26T06:47:59Z
2025-08-13T17:32:07Z
2025-07-28T11:58:11Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7701.diff", "html_url": "https://github.com/huggingface/datasets/pull/7701", "merged_at": "2025-07-28T11:58:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/7701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7701" }
`datasets` currently asks for a max fsspec version of `2025.3.0`; this change updates the pin to the current latest release, `2025.7.0`. It is mainly needed to resolve dependency conflicts with other packages in an environment. In my particular case, `aider-chat`, which is part of my environment, installs fsspec `2025.5.1`, which is incompatible with `datasets`.
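For context, a version ceiling like this is typically expressed in `setup.py` roughly as follows (illustrative sketch only; the lower bound shown here is an assumption, and the authoritative pin is in the PR diff):

```python
# Illustrative only -- not the actual setup.py of datasets.
# The lower bound is an assumption; the real pin is in the PR diff.
install_requires = [
    "fsspec[http]>=2023.1.0,<=2025.7.0",  # ceiling raised from 2025.3.0 to 2025.7.0
]
```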
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7701/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7700
7,700
[doc] map.num_proc needs clarification
{ "avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4", "events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}", "followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers", "following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}", "gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sfc-gh-sbekman", "id": 196988264, "login": "sfc-gh-sbekman", "node_id": "U_kgDOC73NaA", "organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs", "received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events", "repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions", "type": "User", "url": "https://api.github.com/users/sfc-gh-sbekman", "user_view_type": "public" }
[]
open
false
[]
2025-07-25T17:35:09Z
2025-07-25T17:39:36Z
null
NONE
null
null
https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached shards are loaded sequentially.
```
for `batch`:
```
num_proc (int, optional, defaults to None): The number of processes to use for multiprocessing. If None, no multiprocessing is used. This can significantly speed up batching for large datasets.
```
So what happens with `map.num_proc`: does it behave the same as `batch.num_proc`, i.e. multiprocessing is skipped only when `num_proc=None`? Let's update the doc to be unambiguous. **bonus**: we could make all of these behave like `DataLoader.num_workers`, where `num_workers==0` implies no multiprocessing. I think that's the most intuitive, IMHO: with 0 workers, the main process has to do all the work, and `None` could be the same as `0`. Context: debugging a failing `map`. Thank you!
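A small runnable sketch of the two conventions being contrasted here (assumes a toy in-memory dataset; the `torch` half is only there for the `num_workers` analogy):

```python
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"x": list(range(8))})

ds.map(lambda ex: {"y": ex["x"] * 2})              # num_proc=None: runs in the main process
ds.map(lambda ex: {"y": ex["x"] * 2}, num_proc=2)  # spawns 2 worker processes

# torch's convention: num_workers=0 means the main process does all the work
loader = DataLoader(ds.with_format("torch"), num_workers=0)
```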
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7700/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7699
7,699
Broken link in documentation for "Create a video dataset"
{ "avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4", "events_url": "https://api.github.com/users/cleong110/events{/privacy}", "followers_url": "https://api.github.com/users/cleong110/followers", "following_url": "https://api.github.com/users/cleong110/following{/other_user}", "gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cleong110", "id": 122366389, "login": "cleong110", "node_id": "U_kgDOB0sptQ", "organizations_url": "https://api.github.com/users/cleong110/orgs", "received_events_url": "https://api.github.com/users/cleong110/received_events", "repos_url": "https://api.github.com/users/cleong110/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cleong110/subscriptions", "type": "User", "url": "https://api.github.com/users/cleong110", "user_view_type": "public" }
[]
open
false
[ "The URL is ok but it seems the webdataset website is down. There seems to be a related issue here: https://github.com/webdataset/webdataset/issues/155\n\nFeel free to ask the authors there for an update. Otherwise happy to witch the link to the mirror shared in that issue" ]
2025-07-24T19:46:28Z
2025-07-25T15:27:47Z
null
NONE
null
null
The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken. https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset <img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7698
7,698
NotImplementedError when using streaming=True in Google Colab environment
{ "avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4", "events_url": "https://api.github.com/users/Aniket17200/events{/privacy}", "followers_url": "https://api.github.com/users/Aniket17200/followers", "following_url": "https://api.github.com/users/Aniket17200/following{/other_user}", "gists_url": "https://api.github.com/users/Aniket17200/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aniket17200", "id": 100470741, "login": "Aniket17200", "node_id": "U_kgDOBf0P1Q", "organizations_url": "https://api.github.com/users/Aniket17200/orgs", "received_events_url": "https://api.github.com/users/Aniket17200/received_events", "repos_url": "https://api.github.com/users/Aniket17200/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aniket17200/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aniket17200/subscriptions", "type": "User", "url": "https://api.github.com/users/Aniket17200", "user_view_type": "public" }
[]
open
false
[ "Hi, @Aniket17200, try upgrading datasets using '!pip install -U datasets'. I hope this will resolve your issue.", "Thank you @tanuj-rai, it's working great " ]
2025-07-23T08:04:53Z
2025-07-23T15:06:23Z
null
NONE
null
null
### Describe the bug When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after upgrading datasets and huggingface_hub and restarting the session. ### Steps to reproduce the bug Open a new Google Colab notebook. (Optional but recommended) Run !pip install --upgrade datasets huggingface_hub and restart the runtime. Then run the following code:
```python
from datasets import load_dataset

try:
    print("Attempting to load a stream...")
    streaming_dataset = load_dataset('tiiuae/falcon-refinedweb', streaming=True)
    print("Success!")
except Exception as e:
    print(e)
```
### Expected behavior The load_dataset command should return a StreamingDataset object without raising an error, allowing iteration over the dataset. Actual behavior: the code fails and prints the following error traceback: [PASTE THE FULL ERROR TRACEBACK HERE] (Note: copy the entire error message you received, from Traceback... to the final error line, and paste it in this section.) ### Environment info Platform: Google Colab; datasets version: [Run !pip show datasets in Colab and paste the version here]; huggingface_hub version: [Run !pip show huggingface_hub and paste the version here]; Python version: [Run !python --version and paste the version here]
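Per the comments above, upgrading `datasets` (`!pip install -U datasets`) resolves this; a quick check after the upgrade might look like this (sketch):

```python
# After `!pip install -U datasets` in Colab (the fix suggested in the comments above):
from datasets import load_dataset

stream = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
print(next(iter(stream)))  # fetches one record over HTTP; nothing is cached locally
```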
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7698/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7697
7,697
-
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost", "user_view_type": "public" }
[]
closed
false
[]
2025-07-23T01:30:32Z
2025-07-25T15:21:39Z
2025-07-25T15:21:39Z
NONE
null
null
-
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7697/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7696
7,696
load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility
{ "avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4", "events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}", "followers_url": "https://api.github.com/users/Manalelaidouni/followers", "following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}", "gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Manalelaidouni", "id": 25346345, "login": "Manalelaidouni", "node_id": "MDQ6VXNlcjI1MzQ2MzQ1", "organizations_url": "https://api.github.com/users/Manalelaidouni/orgs", "received_events_url": "https://api.github.com/users/Manalelaidouni/received_events", "repos_url": "https://api.github.com/users/Manalelaidouni/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions", "type": "User", "url": "https://api.github.com/users/Manalelaidouni", "user_view_type": "public" }
[]
closed
false
[ "Hi ! This is because `datasets` now uses the FFmpeg-based library `torchcodec` instead of the libsndfile-based library `soundfile` to decode audio data. Those two have different decoding implementations", "I’m all for torchcodec, good luck with the migration!" ]
2025-07-22T17:02:17Z
2025-07-30T14:22:21Z
2025-07-30T14:22:21Z
NONE
null
null
### Describe the bug In the datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions, which breaks integration tests that depend on consistent sample data across different environments (the two envs are specified below). ### Steps to reproduce the bug ```python
from datasets import Audio, load_dataset

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(24000))
sample = ds[0]["audio"]["array"]
print(sample)

# sample in 3.6.0
[0.00231914 0.00245417 0.00187414 ... 0.00061956 0.00101157 0.00076325]

# sample in 4.0.0
array([0.00238037, 0.00220794, 0.00198703, ..., 0.00057983, 0.00085863, 0.00115309], dtype=float32)
```
### Expected behavior The same dataset should load identical samples across versions to maintain reproducibility. ### Environment info First env: - datasets version: 3.6.0 - Platform: Windows-10-10.0.26100-SP0 - Python: 3.11.0 Second env: - datasets version: 4.0.0 - Platform: Linux-6.1.123+-x86_64-with-glibc2.35 - Python: 3.11.13
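Given the decoder switch described in the comments above (soundfile to torchcodec), one way to keep such tests stable across versions is to compare waveforms with a tolerance instead of exact equality (sketch; the tolerance value is an assumption to tune per dataset):

```python
import numpy as np

def assert_waveforms_close(a, b, atol=1e-2):
    """Compare decoded audio with a tolerance rather than bit-exact equality."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    assert a.shape == b.shape
    assert np.allclose(a, b, atol=atol)
```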
{ "avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4", "events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}", "followers_url": "https://api.github.com/users/Manalelaidouni/followers", "following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}", "gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Manalelaidouni", "id": 25346345, "login": "Manalelaidouni", "node_id": "MDQ6VXNlcjI1MzQ2MzQ1", "organizations_url": "https://api.github.com/users/Manalelaidouni/orgs", "received_events_url": "https://api.github.com/users/Manalelaidouni/received_events", "repos_url": "https://api.github.com/users/Manalelaidouni/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions", "type": "User", "url": "https://api.github.com/users/Manalelaidouni", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7696/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7695
7,695
Support downloading specific splits in load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
[ "I’ve completed the following steps to continue the partial split download support (from PR #6832):\r\n\r\nI did changes on top of what has been done by mario. Here are some of those changes: \r\n- Restored support for writing multiple split shards:\r\n\r\n- In _prepare_split_single, we now correctly replace JJJJJ and SSSSS placeholders in the fpath for job/shard IDs before creating the writer.\r\n\r\n- Added os.makedirs(os.path.dirname(path), exist_ok=True) after placeholder substitution to prevent FileNotFoundError.\r\n\r\n- Applied the fix to both split writers:\r\n\r\n 1] self._generate_examples version (used by most modules).\r\n\r\n 2] self._generate_tables version (used by IterableDatasetBuilder).\r\n\r\n- Confirmed 109/113 tests passing, meaning the general logic is working across the board.\r\n\r\nWhat’s still failing\r\n4 integration tests fail:\r\n\r\n`test_load_hub_dataset_with_single_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_two_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_metadata_config_in_parallel`\r\n\r\n`test_reload_old_cache_from_2_15`\r\n\r\nAll are due to FileNotFoundError from uncreated output paths, which I'm currently finalizing by ensuring os.makedirs() is correctly applied before every writer instantiation.\r\n\r\nI will update about these fixes after running tests!", "@lhoestq this was just an update", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7695). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Local DIR wasn't doing well, dk actually what happened, will PR again! Sorry :)" ]
2025-07-22T09:33:54Z
2025-07-28T17:33:30Z
2025-07-28T17:15:45Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7695.diff", "html_url": "https://github.com/huggingface/datasets/pull/7695", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7695.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7695" }
This PR builds on #6832 by @mariosasko. May close #4101 and #2538. Discussion - https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130 --- ### Note - This PR is a work in progress and frequent changes will be pushed.
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7695/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7694
7,694
Dataset.to_json consumes excessive memory, appears to not be a streaming operation
{ "avatar_url": "https://avatars.githubusercontent.com/u/49603999?v=4", "events_url": "https://api.github.com/users/ycq0125/events{/privacy}", "followers_url": "https://api.github.com/users/ycq0125/followers", "following_url": "https://api.github.com/users/ycq0125/following{/other_user}", "gists_url": "https://api.github.com/users/ycq0125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ycq0125", "id": 49603999, "login": "ycq0125", "node_id": "MDQ6VXNlcjQ5NjAzOTk5", "organizations_url": "https://api.github.com/users/ycq0125/orgs", "received_events_url": "https://api.github.com/users/ycq0125/received_events", "repos_url": "https://api.github.com/users/ycq0125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ycq0125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ycq0125/subscriptions", "type": "User", "url": "https://api.github.com/users/ycq0125", "user_view_type": "public" }
[]
open
false
[ "Hi ! to_json is memory efficient and writes the data by batch:\n\nhttps://github.com/huggingface/datasets/blob/d9861d86be222884dabbd534a2db770c70c9b558/src/datasets/io/json.py#L153-L159\n\nWhat memory are you mesuring ? If you are mesuring RSS, it is likely that it counts the memory mapped data of the dataset. Memory mapped data are loaded as physical memory when accessed and are automatically discarded when your OS needs more memory, and therefore doesn't OOM." ]
2025-07-21T07:51:25Z
2025-07-25T14:42:21Z
null
NONE
null
null
### Describe the bug When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation. This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes. This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers. <img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" /> ### Steps to reproduce the bug ``` import os from datasets import load_dataset, Dataset from loguru import logger # A public dataset to test with REPO_ID = "adam89/TinyStoriesChinese" SUBSET = "default" SPLIT = "train" NUM_ROWS_TO_LOAD = 10 # Use a reasonably large number to see the memory spike def run_test(): """Loads data into memory and then saves it, triggering the memory issue.""" logger.info("Step 1: Loading data into an in-memory Dataset object...") # Create an in-memory Dataset object from a stream # This simulates having a processed dataset ready to be saved iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True) limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD) in_memory_dataset = Dataset.from_generator(limited_stream.__iter__) logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.") output_path = "./test_output.jsonl" logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...") logger.info("Please monitor memory usage during this step.") # This is the step that causes the massive memory allocation in_memory_dataset.to_json(output_path, force_ascii=False) logger.info("Save operation complete.") os.remove(output_path) if __name__ == "__main__": # To see the memory usage clearly, run this script with a memory profiler: # python -m memray run your_script_name.py # python -m memray tree xxx.bin run_test() ``` ### Expected behavior I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset. ### Environment info datasets version:3.6.0 Python version:3.9.18 os:macOS 15.3.1 (arm64)
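As the maintainer comment above notes, `to_json` already writes in batches; its `batch_size` argument can be lowered to bound the peak further. A small self-contained check:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello world"] * 10_000})
# batch_size controls how many rows are converted and written per chunk
ds.to_json("out.jsonl", batch_size=1_000, force_ascii=False)
```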
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7694/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7693
7,693
Dataset scripts are no longer supported, but found superb.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/114297534?v=4", "events_url": "https://api.github.com/users/edwinzajac/events{/privacy}", "followers_url": "https://api.github.com/users/edwinzajac/followers", "following_url": "https://api.github.com/users/edwinzajac/following{/other_user}", "gists_url": "https://api.github.com/users/edwinzajac/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/edwinzajac", "id": 114297534, "login": "edwinzajac", "node_id": "U_kgDOBtAKvg", "organizations_url": "https://api.github.com/users/edwinzajac/orgs", "received_events_url": "https://api.github.com/users/edwinzajac/received_events", "repos_url": "https://api.github.com/users/edwinzajac/repos", "site_admin": false, "starred_url": "https://api.github.com/users/edwinzajac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edwinzajac/subscriptions", "type": "User", "url": "https://api.github.com/users/edwinzajac", "user_view_type": "public" }
[]
open
false
[ "I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`", "Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)\n\nRuntimeError: Dataset scripts are no longer supported, but found librispeech_asr.py\n\nWhat am I supposed to do at this point?\n\nThanks", "hey I got the same error and I have tried to downgrade version to 3.6.0 and it works.\n`pip install datasets==3.6.0`", "Thank you very much @Tin-viAct . That indeed did the trick for me :) \nNow the code continue its normal flow ", "Thanks @Tin-viAct, Works!", "I converted [openslr/librispeech_asr](https://huggingface.co/datasets/openslr/librispeech_asr) to Parquet - thanks for reporting.\n\nIt's now compatible with `datasets` 4.0 !\n\nI'll try to ping the authors of the other datasets like [s3prl/superb](https://huggingface.co/datasets/s3prl/superb) and [espnet/yodas2](https://huggingface.co/datasets/espnet/yodas2)", "How come a breaking change was allowed and now requires extra work from individual authors for things to be usable? \n\nhttps://en.wikipedia.org/wiki/Backward_compatibility", "We follow semantic versioning so that breaking changes only occur in major releases. Also note that dataset scripts have been legacy for some time now, with a message on the dataset pages to ask authors to update their datasets.\n\nIt's ok to ping older versions of `datasets`, but imo a few remaining datasets need to be converted since they are valuable to the community.", "I was facing the same issue with a not so familiar dataset in hugging hub . downgrading the datasets version worked ❀️. Thank you @Tin-viAct .", "Thank you so much, @Tin-viAct ! I’ve been struggling with this issue for about 3 hours, and your suggestion to downgrade datasets worked perfectly. I really appreciate the helpβ€”you saved me!", "> hey I got the same error and I have tried to downgrade version to 3.6.0 and it works. `pip install datasets==3.6.0`\n\nThank you so much! I was following the [quickstart](https://huggingface.co/docs/datasets/quickstart) and the very first sample fails. Not a good way to get started....", "> hey I got the same error and I have tried to downgrade version to 3.6.0 and it works. `pip install datasets==3.6.0`\nthank you! I get it.\n", "I updated `hotpot_qa` and pinged the PolyAI folks to update the dataset used in the quickstart as well: https://huggingface.co/datasets/PolyAI/minds14/discussions/35\nedit: merged !\nedit2: quickstart dataset is also fixed !", "[LegalBench](https://huggingface.co/datasets/nguha/legalbench) is downloaded 10k times a month and is now broken. Would be great to have this fixed.", "I opened a PR to convert LegalBench to Parquet and reached out to the author: https://huggingface.co/datasets/nguha/legalbench/discussions/34", "Thank you very much @Tin-viAct! I’d been looking everywhere for a fix, and your reply saved me :)", "Tried downgrading the datasets version. 
But the problem with this is that it had led to compatibility issues and other breaking changes and more errors on other parts of my code ", "I opened a few more PRs and reached out to the authors:\n- https://huggingface.co/datasets/Skylion007/openwebtext/discussions/22\n- https://huggingface.co/datasets/stas/openwebtext-10k/discussions/2\n\nBtw if you want to open a PR to a dataset to convert it to Parquet here is the command:\n\n```\nuv run --with \"datasets==3.6.0\" datasets-cli convert_to_parquet <username/dataset-name> --trust_remote_code\n```\n\n(just replace the `<username/dataset-name>` with the dataset repository name)" ]
2025-07-20T13:48:06Z
2025-09-04T10:32:12Z
null
NONE
null
null
### Describe the bug Hello, I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines) but the tutorial seems to work only on old datasets versions. I then get this error:
```
--------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[65], [line 1](vscode-notebook-cell:?execution_count=65&line=1)
----> [1](vscode-notebook-cell:?execution_count=65&line=1) dataset = datasets.load_dataset("superb", name="asr", split="test")
3 # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
4 # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
5 for out in tqdm(pipe(KeyDataset(dataset, "file"))):

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1387 verification_mode = VerificationMode(
1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
1389 )
1391 # Create a dataset builder
-> [1392](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392) builder_instance = load_dataset_builder(
1393 path=path,
1394 name=name,
1395 data_dir=data_dir,
1396 data_files=data_files,
1397 cache_dir=cache_dir,
1398 features=features,
1399 download_config=download_config,
1400 download_mode=download_mode,
1401 revision=revision,
1402 token=token,
1403 storage_options=storage_options,
1404 **config_kwargs,
1405 )
1407 # Return iterable dataset in case of streaming
1408 if streaming:

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
1130 if features is not None:
1131 features = _fix_for_backward_compatible_features(features)
-> [1132](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132) dataset_module = dataset_module_factory(
1133 path,
1134 revision=revision,
1135 download_config=download_config,
1136 download_mode=download_mode,
1137 data_dir=data_dir,
1138 data_files=data_files,
1139 cache_dir=cache_dir,
1140 )
1141 # Get dataset builder class
1142 builder_kwargs = dataset_module.builder_kwargs

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
1026 if isinstance(e1, FileNotFoundError):
1027 raise FileNotFoundError(
1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1030 ) from None
-> [1031](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031) raise e1 from None
1032 else:
1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
981 try:
982 api.hf_hub_download(
983 repo_id=path,
984 filename=filename,
(...)
987 proxies=download_config.proxies,
988 )
--> [989](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989) raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
990 except EntryNotFoundError:
991 # Use the infos from the parquet export except in some cases:
992 if data_dir or data_files or (revision and revision != "main"):

RuntimeError: Dataset scripts are no longer supported, but found superb.py
```
NB: I tried to replace "superb" with "anton-l/superb_demo" but I got a 'torchcodec' import error. Maybe I misunderstood something. ### Steps to reproduce the bug
```
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm

pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")

# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ....}
    # ....
```
### Expected behavior Get the tutorial's expected results ### Environment info --- SYSTEM INFO --- Operating System: Ubuntu 24.10 Kernel: Linux 6.11.0-29-generic Architecture: x86-64 --- PYTHON --- Python 3.11.13 --- VENV INFO --- datasets=4.0.0 transformers=4.53 tqdm=4.67.1
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7693/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7692
7,692
xopen: invalid start byte for streaming dataset with trust_remote_code=True
{ "avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4", "events_url": "https://api.github.com/users/sedol1339/events{/privacy}", "followers_url": "https://api.github.com/users/sedol1339/followers", "following_url": "https://api.github.com/users/sedol1339/following{/other_user}", "gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sedol1339", "id": 5188731, "login": "sedol1339", "node_id": "MDQ6VXNlcjUxODg3MzE=", "organizations_url": "https://api.github.com/users/sedol1339/orgs", "received_events_url": "https://api.github.com/users/sedol1339/received_events", "repos_url": "https://api.github.com/users/sedol1339/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions", "type": "User", "url": "https://api.github.com/users/sedol1339", "user_view_type": "public" }
[]
open
false
[ "Hi ! it would be cool to convert this dataset to Parquet. This will make it work for `datasets>=4.0`, enable the Dataset Viewer and make it more reliable to load/stream (currently it uses a loading script in python and those are known for having issues sometimes)\n\nusing `datasets==3.6.0`, here is the command to convert it and open a Pull Request:\n\n```\ndatasets-cli convert_to_parquet espnet/yodas2 --trust_remote_code\n```\n\nThough it's likely that the `UnicodeDecodeError` comes from the loading script. If the script has a bug, it must be fixed to be able to convert the dataset without errors" ]
2025-07-20T11:08:20Z
2025-07-25T14:38:54Z
null
NONE
null
null
### Describe the bug I am trying to load YODAS2 dataset with datasets==3.6.0 ``` from datasets import load_dataset next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True))) ``` And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte` The cause of the error is the following: ``` from datasets.utils.file_utils import xopen filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json' xopen(filepath, 'r').read() >>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte ``` And the cause of this is the following: ``` import fsspec fsspec.open( 'hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json', mode='r', hf={'token': None, 'endpoint': 'https://huggingface.co'}, ).open().read() >>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte ``` Is it true that streaming=True loading is not supported anymore for trust_remote_code=True, even with datasets==3.6.0? This breaks backward compatibility. ### Steps to reproduce the bug ``` from datasets import load_dataset next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True))) ``` ### Expected behavior No errors expected ### Environment info datasets==3.6.0, ubuntu 24.04
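One quick way to diagnose the failure described here: read the file in binary mode and inspect its magic bytes instead of decoding as UTF-8, since an `invalid start byte` usually means the payload is compressed or otherwise non-text (sketch using the same `xopen` helper as above):

```python
from datasets.utils.file_utils import xopen

url = "https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json"
head = xopen(url, "rb").read(4)  # binary mode sidesteps the utf-8 decode
print(head)  # b'\x1f\x8b' would indicate gzip, b'\x28\xb5\x2f\xfd' zstd, etc.
```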
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7692/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7691
7,691
Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4", "events_url": "https://api.github.com/users/cleong110/events{/privacy}", "followers_url": "https://api.github.com/users/cleong110/followers", "following_url": "https://api.github.com/users/cleong110/following{/other_user}", "gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cleong110", "id": 122366389, "login": "cleong110", "node_id": "U_kgDOB0sptQ", "organizations_url": "https://api.github.com/users/cleong110/orgs", "received_events_url": "https://api.github.com/users/cleong110/received_events", "repos_url": "https://api.github.com/users/cleong110/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cleong110/subscriptions", "type": "User", "url": "https://api.github.com/users/cleong110", "user_view_type": "public" }
[]
open
false
[ "It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90", "It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to just set it as \"binary\" type, perhaps?", "I also tried creating a dataset_info.json but the webdataset builder didn't seem to look for it and load it", "Workaround on my end, removed all videos larger than 2GB for now. The dataset no longer crashes.", "Potential patch to webdataset.py could be like so: \n```python\nLARGE_THRESHOLD = 2 * 1024 * 1024 * 1024 # 2 GB\nlarge_fields = set()\n\n# Replace large binary fields with None for schema inference\nprocessed_examples = []\nfor example in first_examples:\n new_example = {}\n for k, v in example.items():\n if isinstance(v, bytes) and len(v) > LARGE_THRESHOLD:\n large_fields.add(k)\n new_example[k] = None # Replace with None to avoid Arrow errors\n else:\n new_example[k] = v\n processed_examples.append(new_example)\n\n# Proceed to infer schema\npa_tables = [\n pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))\n for example in processed_examples\n]\ninferred_arrow_schema = pa.concat_tables(pa_tables, promote_options=\"default\").schema\n\n# Patch features to reflect large_binary\nfeatures = datasets.Features.from_arrow_schema(inferred_arrow_schema)\nfor field in large_fields:\n features[field] = datasets.Value(\"large_binary\")\n\n```" ]
2025-07-19T18:40:27Z
2025-07-25T08:51:10Z
null
NONE
null
null
### Describe the bug I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2GB. The instant I hit one of the shards with one of those videos, I get an ArrowCapacityError, even with streaming. I made a config for the dataset that specifically includes just one problem shard, and the error triggers the instant you even run load_dataset(), even with streaming=True
```
ds = load_dataset("bible-nlp/sign-bibles", "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard", streaming=True, split="train")
```
This gives:
```
File "/opt/home/cleong/projects/semantic_and_visual_similarity/sign-bibles-dataset/sign_bibles_dataset/tasks/test_iteration.py", line 13, in iterate_keys
    ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/load.py", line 1409, in load_dataset
    return builder_instance.as_streaming_dataset(split=split)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/builder.py", line 1225, in as_streaming_dataset
    splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
                                        ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 88, in _split_generators
    pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))
    ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 2046, in pyarrow.lib._Tabular.from_pylist
File "pyarrow/table.pxi", line 6431, in pyarrow.lib._from_pylist
File "pyarrow/table.pxi", line 4893, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1607, in pyarrow.lib._sanitize_arrays
File "pyarrow/table.pxi", line 1588, in pyarrow.lib._schema_from_arrays
File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
File "pyarrow/array.pxi", line 45, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 3980158992
```
### Steps to reproduce the bug
```python
#!/usr/bin/env python
import argparse

from datasets import get_dataset_config_names, load_dataset
from tqdm import tqdm
from pyarrow.lib import ArrowCapacityError, ArrowInvalid


def iterate_keys(language_subset: str) -> None:
    """Iterate over all samples in the Sign Bibles dataset and print idx and sample key."""
    # https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/loading_methods#datasets.load_dataset
    ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
    print(f"\n==> Loaded dataset config '{language_subset}'")

    idx = 0
    estimated_shard_index = 0
    samples_per_shard = 5

    with tqdm(desc=f"{language_subset} samples") as pbar:
        iterator = iter(ds)
        while True:
            try:
                if idx % samples_per_shard == 0 and idx > 0:  # 5 samples per shard: 0, 1, 2, 3, 4
                    print(f"Estimated Shard idx (starting at 0, {samples_per_shard}/shard): {estimated_shard_index}")
                    estimated_shard_index += 1
                sample = next(iterator)
                sample_key = sample.get("__key__", "missing-key")
                print(f"[{language_subset}] idx={idx}, key={sample_key}")
                idx += 1
                pbar.update(1)
            except StopIteration:
                print(f"Finished iterating through {idx} samples of {language_subset}")
                break
            except (ArrowCapacityError, ArrowInvalid) as e:
                print(f"PyArrow error on idx={idx}, config={language_subset}: {e}")
                idx += 1
                pbar.update(1)
                continue
            except KeyError as e:
                print(f"Missing key error on idx={idx}, config={language_subset}: {e}")
                idx += 1
                pbar.update(1)
                continue


def main():
    configs = get_dataset_config_names("bible-nlp/sign-bibles")
    print(f"Available configs: {configs}")
    configs = [
        "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard"
    ]
    for language_subset in configs:
        print(f"TESTING CONFIG {language_subset}")
        iterate_keys(language_subset)
        # try:
        # except (ArrowCapacityError, ArrowInvalid) as e:
        #     print(f"PyArrow error at config level for {language_subset}: {e}")
        #     continue
        # except RuntimeError as e:
        #     print(f"RuntimeError at config level for {language_subset}: {e}")
        #     continue


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Iterate through Sign Bibles dataset and print sample keys.")
    args = parser.parse_args()
    main()
```
### Expected behavior I expect that when I load with streaming=True, no data should actually be loaded. https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset says about the streaming case: > In the streaming case: > Don't download or cache anything. Instead, the dataset is lazily loaded and will be streamed on-the-fly when iterating on it. I did expect to have some trouble with large files, but the streaming mode should not actually try to load them unless requested, e.g. with sample["mp4"]. ### Environment info Local setup: Conda environment on Ubuntu; pip list includes the following: datasets 4.0.0, pyarrow 20.0.0. Verified on Colab: https://colab.research.google.com/drive/1HdN8stlROWrLSYXUoNeV0vQ9pClhIVM8?usp=sharing, though there it crashes by using up all available RAM
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7691/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7690
7,690
HDF5 support
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[]
closed
false
[ "A few to-dos which I think can be left for future PRs (which I am happy to do/help with -- just this one is already huge πŸ˜„ ):\r\n- [Enum types](https://docs.h5py.org/en/stable/special.html#enumerated-types)\r\n- HDF5 [io](https://github.com/huggingface/datasets/tree/main/src/datasets/io)\r\n- [dataset-viewer](https://github.com/huggingface/dataset-viewer) support (not sure if changes are needed with the way it is written now)", "@lhoestq any interest in merging this? Let me know if I can do anything to make reviewing it easier!", "Sorry for the delay, I'll review your PR soon :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7690). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks for the review @lhoestq! Rebased on main and incorporated most of your suggestions.\r\n\r\nI believe the only one left is the zero-dim handling with `table_cast`...", "@lhoestq is 2c4bfba what you meant?", "Awesome! Yes, I'm happy to help with the docs. Would appreciate any pointers, we can discuss in #7740.\r\n\r\nIt does look like there was a CI test failure, though it seems unrelated?\r\n```\r\nFAILED tests/test_dataset_dict.py::test_dummy_datasetdict_serialize_fs - ValueError: Protocol not known: mock\r\nFAILED tests/test_arrow_dataset.py::test_dummy_dataset_serialize_fs - ValueError: Protocol not known: mock\r\n```\r\nAlso, what do you think of the todos in https://github.com/huggingface/datasets/pull/7690#issuecomment-3105391677 ? In particular I think support in dataset-viewer would be nice.", "Cool ! Yeah the failure is unrelated\r\n\r\nRegarding the Viewer, it should work out of the box when it's updated with the next version of `datasets` :)" ]
2025-07-18T21:09:41Z
2025-08-19T15:18:58Z
2025-08-19T13:28:53Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7690.diff", "html_url": "https://github.com/huggingface/datasets/pull/7690", "merged_at": "2025-08-19T13:28:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/7690.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7690" }
This PR adds support for tabular HDF5 file(s) by converting each row to an Arrow table. It supports columns with the usual dtypes, including up to 5-dimensional arrays, as well as complex/compound types via `Features(dict)`. All datasets within the HDF5 file should have rows on the first dimension (groups/subgroups are still allowed). Closes #3113. Replaces #7625, which only supported a relatively small subset of HDF5.
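A hedged sketch of what the merged support might look like from the user side (the `"hdf5"` builder name and exact usage here are assumptions, not confirmed by this PR text; see the PR and its docs for the authoritative API):

```python
import h5py
import numpy as np
from datasets import load_dataset

# build a tiny HDF5 file whose datasets share the row (first) dimension, as the PR requires
with h5py.File("toy.h5", "w") as f:
    f.create_dataset("x", data=np.arange(10))
    f.create_dataset("y", data=np.random.rand(10, 3))

# "hdf5" as a packaged builder name is an assumption here
ds = load_dataset("hdf5", data_files="toy.h5", split="train")
print(ds.features)
```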
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7690/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7689
7,689
BadRequestError for loading dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/45011687?v=4", "events_url": "https://api.github.com/users/WPoelman/events{/privacy}", "followers_url": "https://api.github.com/users/WPoelman/followers", "following_url": "https://api.github.com/users/WPoelman/following{/other_user}", "gists_url": "https://api.github.com/users/WPoelman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/WPoelman", "id": 45011687, "login": "WPoelman", "node_id": "MDQ6VXNlcjQ1MDExNjg3", "organizations_url": "https://api.github.com/users/WPoelman/orgs", "received_events_url": "https://api.github.com/users/WPoelman/received_events", "repos_url": "https://api.github.com/users/WPoelman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/WPoelman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WPoelman/subscriptions", "type": "User", "url": "https://api.github.com/users/WPoelman", "user_view_type": "public" }
[]
closed
false
[ "Same here, for `HuggingFaceFW/fineweb`. Code that worked with no issues for the last 2 months suddenly fails today. Tried updating `datasets`, `huggingface_hub`, `fsspec` to newest versions, but the same error occurs.", "I'm also hitting this issue, with `mandarjoshi/trivia_qa`; My dataset loading was working successfully yesterday - I'm using `huggingface-hub==0.27.1`, `datasets==3.2.0`", "Same, here with `datasets==3.6.0`", "Same, with `datasets==4.0.0`.", "Same here tried different versions of huggingface-hub and datasets but the error keeps occuring ", "A temporary workaround is to first download your dataset with\n\nhuggingface-cli download HuggingFaceH4/ultrachat_200k --repo-type dataset\n\nThen find the local path of the dataset typically like ~/.cache/huggingface/hub/HuggingFaceH4-ultrachat_200k/snapshots/*id*\n\nAnd then load like \n\nfrom datasets import load_dataset\ndataset = load_dataset(\"~/.cache/huggingface/hub/HuggingFaceH4-ultrachat_200k/snapshots/*id*\")\n", "I am also experiencing this issue. I was trying to load TinyStories\nds = datasets.load_dataset(\"roneneldan/TinyStories\", streaming=True, split=\"train\")\n\nresulting in the previously stated error:\nException has occurred: BadRequestError\n(Request ID: Root=1-687a1d09-66cceb496c9401b1084133d6;3550deed-c459-4799-bc74-97924742bd94)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\nβœ– Invalid input: expected array, received string\n β†’ at paths\nβœ– Invalid input: expected boolean, received string\n β†’ at expand\nFileNotFoundError: Dataset roneneldan/TinyStories is not cached in None\n\nThis very code worked fine yesterday, so it's a very recent issue.\n\nEnvironment info:\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"pyarrow version:\", pyarrow.__version__)\nprint(\"pandas version:\", pandas.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\nprint(\"Python version:\", sys.version)\nprint(\"Platform:\", platform.platform())\ndatasets version: 4.0.0\nhuggingface_hub version: 0.33.4\npyarrow version: 19.0.0\npandas version: 2.2.3\nfsspec version: 2024.9.0\nPython version: 3.12.11 (main, Jun 10 2025, 11:55:20) [GCC 15.1.1 20250425]\nPlatform: Linux-6.15.6-arch1-1-x86_64-with-glibc2.41", "Same here with datasets==3.6.0\n```\nhuggingface_hub.errors.BadRequestError: (Request ID: Root=1-687a238d-27374f964534f79f702bc239;61f0669c-cb70-4aff-b57b-73a446f9c65e)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\nβœ– Invalid input: expected array, received string\n β†’ at paths\nβœ– Invalid input: expected boolean, received string\n β†’ at expand\n```", "Same here, works perfectly yesterday\n\n```\nError code: ConfigNamesError\nException: BadRequestError\nMessage: (Request ID: Root=1-687a23a5-314b45b36ce962cf0e431b9a;b979ddb2-a80b-483c-8b1e-403e24e83127)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\nβœ– Invalid input: expected array, received string\n β†’ at paths\nβœ– Invalid input: expected boolean, received string\n β†’ at expand\n```", "It was literally working for me and then suddenly it stopped working next time I run the command. Same issue but private repo so I can't share example. ", "A bug from Hugging Face not us", "Same here!", "@LMSPaul thanks! 
The workaround seems to work (at least for the datasets I tested).\n\nOn the command line:\n```sh\nhuggingface-cli download <dataset-name> --repo-type dataset --local-dir <local-dir>\n```\n\nAnd then in Python:\n```python\nfrom datasets import load_dataset\n\n# The dataset-specific options seem to work with this as well, \n# except for a warning from \"trust_remote_code\"\nds = load_dataset(<local-dir>)\n```", "Same for me.. I couldn't load ..\nIt was perfectly working yesterday..\n\n\nfrom datasets import load_dataset\nraw_datasets = load_dataset(\"glue\", \"mrpc\")\n\nThe error resulting is given below\n\n---------------------------------------------------------------------------\nBadRequestError Traceback (most recent call last)\n/tmp/ipykernel_60/772458687.py in <cell line: 0>()\n 1 from datasets import load_dataset\n----> 2 raw_datasets = load_dataset(\"glue\", \"mrpc\")\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\n 2060 \n 2061 # Create a dataset builder\n-> 2062 builder_instance = load_dataset_builder(\n 2063 path=path,\n 2064 name=name,\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\n 1780 download_config = download_config.copy() if download_config else DownloadConfig()\n 1781 download_config.storage_options.update(storage_options)\n-> 1782 dataset_module = dataset_module_factory(\n 1783 path,\n 1784 revision=revision,\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\n 1662 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\n 1663 ) from None\n-> 1664 raise e1 from None\n 1665 elif trust_remote_code:\n 1666 raise FileNotFoundError(\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\n 1627 download_mode=download_mode,\n 1628 use_exported_dataset_infos=use_exported_dataset_infos,\n-> 1629 ).get_module()\n 1630 except GatedRepoError as e:\n 1631 message = f\"Dataset '{path}' is a gated dataset on the Hub.\"\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in get_module(self)\n 1017 else:\n 1018 patterns = get_data_patterns(base_path, download_config=self.download_config)\n-> 1019 data_files = DataFilesDict.from_patterns(\n 1020 patterns,\n 1021 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in from_patterns(cls, patterns, base_path, allowed_extensions, download_config)\n 687 patterns_for_key\n 688 if isinstance(patterns_for_key, DataFilesList)\n--> 689 else DataFilesList.from_patterns(\n 690 patterns_for_key,\n 691 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in from_patterns(cls, patterns, base_path, allowed_extensions, 
download_config)\n 580 try:\n 581 data_files.extend(\n--> 582 resolve_pattern(\n 583 pattern,\n 584 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in resolve_pattern(pattern, base_path, allowed_extensions, download_config)\n 358 matched_paths = [\n 359 filepath if filepath.startswith(protocol_prefix) else protocol_prefix + filepath\n--> 360 for filepath, info in fs.glob(pattern, detail=True, **glob_kwargs).items()\n 361 if (info[\"type\"] == \"file\" or (info.get(\"islink\") and os.path.isfile(os.path.realpath(filepath))))\n 362 and (xbasename(filepath) not in files_to_ignore)\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in glob(self, path, **kwargs)\n 519 kwargs = {\"expand_info\": kwargs.get(\"detail\", False), **kwargs}\n 520 path = self.resolve_path(path, revision=kwargs.get(\"revision\")).unresolve()\n--> 521 return super().glob(path, **kwargs)\n 522 \n 523 def find(\n\n/usr/local/lib/python3.11/dist-packages/fsspec/spec.py in glob(self, path, maxdepth, **kwargs)\n 635 # any exception allowed bar FileNotFoundError?\n 636 return False\n--> 637 \n 638 def lexists(self, path, **kwargs):\n 639 \"\"\"If there is a file at the given path (including\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in find(self, path, maxdepth, withdirs, detail, refresh, revision, **kwargs)\n 554 \"\"\"\n 555 if maxdepth:\n--> 556 return super().find(\n 557 path, maxdepth=maxdepth, withdirs=withdirs, detail=detail, refresh=refresh, revision=revision, **kwargs\n 558 )\n\n/usr/local/lib/python3.11/dist-packages/fsspec/spec.py in find(self, path, maxdepth, withdirs, detail, **kwargs)\n 498 # This is needed for posix glob compliance\n 499 if withdirs and path != \"\" and self.isdir(path):\n--> 500 out[path] = self.info(path)\n 501 \n 502 for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs):\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in info(self, path, refresh, revision, **kwargs)\n 717 out = out1[0]\n 718 if refresh or out is None or (expand_info and out and out[\"last_commit\"] is None):\n--> 719 paths_info = self._api.get_paths_info(\n 720 resolved_path.repo_id,\n 721 resolved_path.path_in_repo,\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\n 113 \n--> 114 return fn(*args, **kwargs)\n 115 \n 116 return _inner_fn # type: ignore\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_api.py in get_paths_info(self, repo_id, paths, expand, revision, repo_type, token)\n 3397 headers=headers,\n 3398 )\n-> 3399 hf_raise_for_status(response)\n 3400 paths_info = response.json()\n 3401 return [\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py in hf_raise_for_status(response, endpoint_name)\n 463 f\"\\n\\nBad request for {endpoint_name} endpoint:\" if endpoint_name is not None else \"\\n\\nBad request:\"\n 464 )\n--> 465 raise _format(BadRequestError, message, response) from e\n 466 \n 467 elif response.status_code == 403:\n\nBadRequestError: (Request ID: Root=1-687a3201-087954b9245ab59672e6068e;d5bb4dbe-03e1-4912-bcec-5964c017b920)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\nβœ– Invalid input: expected array, received string\n β†’ at paths\nβœ– Invalid input: expected boolean, 
re", "Thanks for the report!\nThe issue has been fixed and should now work without any code changes πŸ˜„\nSorry for the inconvenience!\n\nClosing, please open again if needed.", "Works for me. Thanks!\n", "Yes Now it's works for me..Thanks\r\n\r\nOn Fri, 18 Jul 2025, 5:25β€―pm Karol Brejna, ***@***.***> wrote:\r\n\r\n> *karol-brejna-i* left a comment (huggingface/datasets#7689)\r\n> <https://github.com/huggingface/datasets/issues/7689#issuecomment-3089238320>\r\n>\r\n> Works for me. Thanks!\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7689#issuecomment-3089238320>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AJRBXNEWBJ5UYVC2IRJM5DD3JDODZAVCNFSM6AAAAACB2FDG4GVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZTAOBZGIZTQMZSGA>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
2025-07-18T09:30:04Z
2025-07-18T11:59:51Z
2025-07-18T11:52:29Z
NONE
null
null
### Describe the bug Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error: ``` huggingface_hub.errors.BadRequestError: (Request ID: ...) Bad request: * Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand βœ– Invalid input: expected array, received string β†’ at paths βœ– Invalid input: expected boolean, received string β†’ at expand ``` I tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both. What can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution. ### Steps to reproduce the bug ```python import datasets ds = datasets.load_dataset("Helsinki-NLP/europarl", "en-fr", streaming=True, trust_remote_code=True)["train"] ``` ### Expected behavior That the dataset loads as it did a couple days ago. ### Environment info - `datasets` version: 3.5.1 - Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28 - Python version: 3.11.11 - `huggingface_hub` version: 0.30.2 - PyArrow version: 20.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4", "events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}", "followers_url": "https://api.github.com/users/sergiopaniego/followers", "following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}", "gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sergiopaniego", "id": 17179696, "login": "sergiopaniego", "node_id": "MDQ6VXNlcjE3MTc5Njk2", "organizations_url": "https://api.github.com/users/sergiopaniego/orgs", "received_events_url": "https://api.github.com/users/sergiopaniego/received_events", "repos_url": "https://api.github.com/users/sergiopaniego/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions", "type": "User", "url": "https://api.github.com/users/sergiopaniego", "user_view_type": "public" }
{ "+1": 23, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 23, "url": "https://api.github.com/repos/huggingface/datasets/issues/7689/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
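For reference, the Hub endpoint that the tracebacks above point at (`HfApi.get_paths_info`, reached via `HfFileSystem.glob`/`info`) can be exercised directly. A minimal sketch, assuming `huggingface_hub` is installed; the repo id and path are only example values, and during the outage even well-formed calls like this were rejected server-side:

```python
from huggingface_hub import HfApi

api = HfApi()
# datasets' file resolution (fs.glob -> info) calls get_paths_info() under
# the hood; `paths` must be a list of strings and `expand` a boolean,
# which matches the "expected array"/"expected boolean" validation errors
# in the reports above.
infos = api.get_paths_info(
    "Helsinki-NLP/europarl",  # example repo id
    paths=["README.md"],
    expand=False,
    repo_type="dataset",
)
print(infos)
```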
https://github.com/huggingface/datasets/issues/7688
7,688
No module named "distributed"
{ "avatar_url": "https://avatars.githubusercontent.com/u/45058324?v=4", "events_url": "https://api.github.com/users/yingtongxiong/events{/privacy}", "followers_url": "https://api.github.com/users/yingtongxiong/followers", "following_url": "https://api.github.com/users/yingtongxiong/following{/other_user}", "gists_url": "https://api.github.com/users/yingtongxiong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yingtongxiong", "id": 45058324, "login": "yingtongxiong", "node_id": "MDQ6VXNlcjQ1MDU4MzI0", "organizations_url": "https://api.github.com/users/yingtongxiong/orgs", "received_events_url": "https://api.github.com/users/yingtongxiong/received_events", "repos_url": "https://api.github.com/users/yingtongxiong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yingtongxiong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yingtongxiong/subscriptions", "type": "User", "url": "https://api.github.com/users/yingtongxiong", "user_view_type": "public" }
[]
open
false
[ "The error ModuleNotFoundError: No module named 'datasets.distributed' means your installed datasets library is too old or incompatible with the version of Library you are using(in my case it was BEIR). The datasets.distributed module was removed in recent versions of the datasets library.\n\nDowngrade datasets to version 2.14.6 : ! pip install datasets==2.14.6\n", "this code does run in `datasets` 4.0:\n```python\nfrom datasets.distributed import split_dataset_by_node\n```\n\nmake sure you have a python version that is recent enough (>=3.9) to be able to install `datasets` 4.0", "I do think the problem is caused by the python version, because I do have python version 3.12.5" ]
2025-07-17T09:32:35Z
2025-07-25T15:14:19Z
null
NONE
null
null
### Describe the bug Hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always hit the error "No module named 'datasets.distributed'" with different versions such as 4.0.0, 2.21.0, and so on. How can I solve this? ### Steps to reproduce the bug 1. pip install datasets 2. from datasets.distributed import split_dataset_by_node ### Expected behavior The command "from datasets.distributed import split_dataset_by_node" should run successfully ### Environment info python: 3.12
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7688/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
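For reference, a minimal usage sketch of the import in question, which works on `datasets` 4.0 with Python >= 3.9; the dataset id and world size here are example values:

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Works for both map-style and streaming (iterable) datasets.
ds = load_dataset("roneneldan/TinyStories", split="train", streaming=True)

# Each worker keeps only its own shard; rank/world_size would normally
# come from the launcher (e.g. torchrun) environment variables.
shard = split_dataset_by_node(ds, rank=0, world_size=2)
print(next(iter(shard)))
```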
https://github.com/huggingface/datasets/issues/7687
7,687
Datasets keeps rebuilding the dataset every time I call the Python script
{ "avatar_url": "https://avatars.githubusercontent.com/u/58883113?v=4", "events_url": "https://api.github.com/users/CALEB789/events{/privacy}", "followers_url": "https://api.github.com/users/CALEB789/followers", "following_url": "https://api.github.com/users/CALEB789/following{/other_user}", "gists_url": "https://api.github.com/users/CALEB789/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CALEB789", "id": 58883113, "login": "CALEB789", "node_id": "MDQ6VXNlcjU4ODgzMTEz", "organizations_url": "https://api.github.com/users/CALEB789/orgs", "received_events_url": "https://api.github.com/users/CALEB789/received_events", "repos_url": "https://api.github.com/users/CALEB789/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CALEB789/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CALEB789/subscriptions", "type": "User", "url": "https://api.github.com/users/CALEB789", "user_view_type": "public" }
[]
open
false
[ "here is the code to load the dataset form the cache:\n\n```python\ns = load_dataset('databricks/databricks-dolly-15k')['train']\n```\n\nif you pass the location of a local directory it will create a new cache based on that directory content" ]
2025-07-17T09:03:38Z
2025-07-25T15:21:31Z
null
NONE
null
null
### Describe the bug Every time it runs, somehow, the number of samples increases. This can cause a 12 MB dataset to accumulate additional built versions of 400 MB+ <img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" /> ### Steps to reproduce the bug ```python from datasets import load_dataset s = load_dataset('~/.cache/huggingface/datasets/databricks___databricks-dolly-15k')['train'] ``` 1. A dataset needs to be available in the .cache folder 2. Run the code multiple times, and every time it runs, more versions are created ### Expected behavior The dataset should load from the existing cache instead of creating more versions (and more samples) every time the script runs ### Environment info - `datasets` version: 3.6.0 - Platform: Windows-11-10.0.26100-SP0 - Python version: 3.13.3 - `huggingface_hub` version: 0.32.3 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7687/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
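For reference, a minimal sketch of the cache-friendly pattern the maintainer describes above: pass the Hub repo id rather than a path inside the cache directory, so repeated runs reuse the same cache instead of rebuilding it:

```python
from datasets import load_dataset

# Loading by repo id reuses ~/.cache/huggingface/datasets on every run.
# Passing a path inside that cache directory instead makes the loader
# treat it as a brand-new local dataset and build yet another copy,
# which is what inflated the cache in the report above.
s = load_dataset("databricks/databricks-dolly-15k")["train"]
print(s.num_rows)
```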
https://github.com/huggingface/datasets/issues/7686
7,686
load_dataset does not check .no_exist files in the hub cache
{ "avatar_url": "https://avatars.githubusercontent.com/u/3627235?v=4", "events_url": "https://api.github.com/users/jmaccarl/events{/privacy}", "followers_url": "https://api.github.com/users/jmaccarl/followers", "following_url": "https://api.github.com/users/jmaccarl/following{/other_user}", "gists_url": "https://api.github.com/users/jmaccarl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmaccarl", "id": 3627235, "login": "jmaccarl", "node_id": "MDQ6VXNlcjM2MjcyMzU=", "organizations_url": "https://api.github.com/users/jmaccarl/orgs", "received_events_url": "https://api.github.com/users/jmaccarl/received_events", "repos_url": "https://api.github.com/users/jmaccarl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmaccarl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmaccarl/subscriptions", "type": "User", "url": "https://api.github.com/users/jmaccarl", "user_view_type": "public" }
[]
open
false
[]
2025-07-16T20:04:00Z
2025-07-16T20:04:00Z
null
NONE
null
null
### Describe the bug I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack. The fundamental issue is that the `load_datasets` api doesn't use the `.no_exist` files in the hub cache unlike other wrapper APIs that do. This is because the `utils.file_utils.cached_path` used directly calls `hf_hub_download` instead of using `file_download.try_to_load_from_cache` from `huggingface_hub` (see `transformers` library `utils.hub.cached_files` for one alternate example). This results in unnecessary metadata HTTP requests occurring for files that don't exist on every call. It won't generate the .no_exist cache files, nor will it use them. ### Steps to reproduce the bug Run the following snippet as one example (setting cache dirs to clean paths for clarity) `env HF_HOME=~/local_hf_hub python repro.py` ``` from datasets import load_dataset import huggingface_hub # monkeypatch to print out metadata requests being made original_get_hf_file_metadata = huggingface_hub.file_download.get_hf_file_metadata def get_hf_file_metadata_wrapper(*args, **kwargs): print("File metadata request made (get_hf_file_metadata):", args, kwargs) return original_get_hf_file_metadata(*args, **kwargs) # Apply the patch huggingface_hub.file_download.get_hf_file_metadata = get_hf_file_metadata_wrapper dataset = load_dataset( "Salesforce/wikitext", "wikitext-2-v1", split="test", trust_remote_code=True, cache_dir="~/local_datasets", revision="b08601e04326c79dfdd32d625aee71d232d685c3", ) ``` This may be called over and over again, and you will see the same calls for files that don't exist: ``` File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/wikitext.py', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None} File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/.huggingface.yaml', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None} File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/dataset_infos.json', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None} ``` And you can see that the .no_exist folder is never created ``` $ ls ~/local_hf_hub/hub/datasets--Salesforce--wikitext/ blobs refs snapshots ``` ### Expected behavior The expected behavior is for the print "File metadata request made" to stop after the first call, and for .no_exist directory & files to be populated under ~/local_hf_hub/hub/datasets--Salesforce--wikitext/ ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.5.13-65-650-4141-22041-coreweave-amd64-85c45edc-x86_64-with-glibc2.35 - Python version: 3.12.11 - `huggingface_hub` version: 0.33.2 - PyArrow version: 20.0.0 - Pandas version: 2.3.1 - `fsspec` version: 2024.9.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7686/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
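For reference, a minimal sketch of the `huggingface_hub` helper the report compares against (the one `transformers` uses via `utils.hub.cached_files`), which is what consults `.no_exist` entries before making any HTTP request; the repo id and filename are taken from the repro above:

```python
from huggingface_hub import _CACHED_NO_EXIST, try_to_load_from_cache

result = try_to_load_from_cache(
    "Salesforce/wikitext", ".huggingface.yaml", repo_type="dataset"
)
if result is _CACHED_NO_EXIST:
    # A previous lookup recorded that this file does not exist on the Hub,
    # so no metadata request is needed.
    print("cached as non-existent")
elif result is None:
    print("nothing cached; a metadata request would be made")
else:
    print(f"cached locally at {result}")
```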
https://github.com/huggingface/datasets/issues/7685
7,685
Inconsistent range request behavior for parquet REST api
{ "avatar_url": "https://avatars.githubusercontent.com/u/21327470?v=4", "events_url": "https://api.github.com/users/universalmind303/events{/privacy}", "followers_url": "https://api.github.com/users/universalmind303/followers", "following_url": "https://api.github.com/users/universalmind303/following{/other_user}", "gists_url": "https://api.github.com/users/universalmind303/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/universalmind303", "id": 21327470, "login": "universalmind303", "node_id": "MDQ6VXNlcjIxMzI3NDcw", "organizations_url": "https://api.github.com/users/universalmind303/orgs", "received_events_url": "https://api.github.com/users/universalmind303/received_events", "repos_url": "https://api.github.com/users/universalmind303/repos", "site_admin": false, "starred_url": "https://api.github.com/users/universalmind303/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/universalmind303/subscriptions", "type": "User", "url": "https://api.github.com/users/universalmind303", "user_view_type": "public" }
[]
open
false
[ "This is a weird bug, is it a range that is supposed to be satisfiable ? I mean, is it on the boundraries ?\n\nLet me know if you'r e still having the issue, in case it was just a transient bug", "@lhoestq yes the ranges are supposed to be satisfiable, and _sometimes_ they are. \n\nThe head requests show that it does in fact accept a byte range. \n\n```\n> curl -IL \"https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet\" \n\n\nHTTP/2 200\ncontent-length: 218006142\ncontent-disposition: inline; filename*=UTF-8''0000.parquet; filename=\"0000.parquet\";\ncache-control: public, max-age=31536000\netag: \"cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9\"\naccess-control-allow-origin: *\naccess-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag\naccess-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache\naccept-ranges: bytes\nx-request-id: 01K11493PRMCZKVSNCBF1EX1WJ\ndate: Fri, 25 Jul 2025 15:47:25 GMT\nx-cache: Hit from cloudfront\nvia: 1.1 ad637ff39738449b56ab4eac4b02cbf4.cloudfront.net (CloudFront)\nx-amz-cf-pop: MSP50-P2\nx-amz-cf-id: ti1Ze3e0knGMl0PkeZ_F_snZNZe4007D9uT502MkGjM4NWPYWy13wA==\nage: 15\ncontent-security-policy: default-src 'none'; sandbox\n```\n\nand as I mentioned, _sometimes_ it satisfies the request \n\n```\n* Request completely sent off\n< HTTP/2 206\n< content-length: 131072\n< content-disposition: inline; filename*=UTF-8''0000.parquet; filename=\"0000.parquet\";\n< cache-control: public, max-age=31536000\n< etag: \"cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9\"\n< access-control-allow-origin: *\n< access-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag\n< access-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache\n< x-request-id: 01K1146P5PNC4D2XD348C78BTC\n< date: Fri, 25 Jul 2025 15:46:06 GMT\n< x-cache: Hit from cloudfront\n< via: 1.1 990606ab91bf6503d073ad5fee40784c.cloudfront.net (CloudFront)\n< x-amz-cf-pop: MSP50-P2\n< x-amz-cf-id: l58ghqEzNZn4eo4IRNl76fOFrHTk_TJKeLi0-g8YYHmq7Oh3s8sXnQ==\n< age: 248\n< content-security-policy: default-src 'none'; sandbox\n< content-range: bytes 217875070-218006141/218006142\n```\n\nbut more often than not, it returns a 416\n```\n* Request completely sent off\n< HTTP/2 416\n< content-type: text/html\n< content-length: 49\n< server: CloudFront\n< date: Fri, 25 Jul 2025 15:51:08 GMT\n< expires: Fri, 25 Jul 2025 15:51:08 GMT\n< content-range: bytes */177\n< x-cache: Error from cloudfront\n< via: 1.1 65ba38c8dc30018660c405d1f32ef3a0.cloudfront.net (CloudFront)\n< x-amz-cf-pop: MSP50-P1\n< x-amz-cf-id: 1t1Att_eqiO-LmlnnaO-cCPoh6G2AIQDaklhS08F_revXNqijMpseA==\n```\n\n\n", "As a workaround, adding a unique parameter to the url avoids the CDN caching and returns the correct result. \n\n```\n❯ curl -v -L -H \"Range: bytes=217875070-218006142\" -o output.parquet \"https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet?cachebust=<SOMEUNIQUESTRING>\" \n``` \n", "@lhoestq Is there any update on this? We (daft) have been getting more reports of this when users are reading huggingface datasets. ", "> [@lhoestq](https://github.com/lhoestq) Is there any update on this? 
We (daft) have been getting more reports of this when users are reading huggingface datasets.\n\nHello,\nWe have temporarily disabled the caching rule that could be the origin of this issue. Meanwhile, we are still investigating the problem." ]
2025-07-16T18:39:44Z
2025-08-11T08:16:54Z
null
NONE
null
null
### Describe the bug First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere. The datasets rest api is inconsistently giving `416 Range Not Satisfiable` when using a range request to get portions of the parquet files. More often than not, I am seeing 416, but other times for an identical request, it gives me the data along with `206 Partial Content` as expected. ### Steps to reproduce the bug repeating this request multiple times will return either 416 or 206. ```sh $ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" ``` Note: this is not limited to just the above file, I tried with many different datasets and am able to consistently reproduce issue across multiple datasets. when the 416 is returned, I get the following headers ``` < HTTP/2 416 < content-type: text/html < content-length: 49 < server: CloudFront < date: Wed, 16 Jul 2025 14:58:43 GMT < expires: Wed, 16 Jul 2025 14:58:43 GMT < content-range: bytes */177 < x-cache: Error from cloudfront < via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront) < ``` this suggests to me that there is likely a CDN/caching/routing issue happening and the request is not getting routed properly. Full verbose output via curl. <details> ❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86 * Trying 18.160.102.96:443... * Connected to huggingface.co (18.160.102.96) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [319 bytes data] * CAfile: /etc/ssl/cert.pem * CApath: none * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3821 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=huggingface.co * start date: Apr 13 00:00:00 2025 GMT * expire date: May 12 23:59:59 2026 GMT * subjectAltName: host "huggingface.co" matched cert's "huggingface.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: huggingface.co] * [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 416 < content-type: text/html < content-length: 49 < server: CloudFront < date: Wed, 16 Jul 2025 14:58:41 GMT < expires: Wed, 16 Jul 2025 14:58:41 GMT < content-range: bytes */177 < x-cache: Error from cloudfront < via: 1.1 e2f1bed2f82641d6d5439eac20a790ba.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: Mo8hn-EZLJqE_hoBday8DdhmVXhV3v9-Wg-EEHI6gX_fNlkanVIUBA== < { [49 bytes data] 100 49 100 49 0 0 2215 0 --:--:-- --:--:-- --:--:-- 2227 * Connection #0 to host huggingface.co left intact (.venv) Daft main*​* ≑❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86 * Trying 18.160.102.96:443... * Connected to huggingface.co (18.160.102.96) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [319 bytes data] * CAfile: /etc/ssl/cert.pem * CApath: none * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3821 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=huggingface.co * start date: Apr 13 00:00:00 2025 GMT * expire date: May 12 23:59:59 2026 GMT * subjectAltName: host "huggingface.co" matched cert's "huggingface.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: huggingface.co] * [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 416 < content-type: text/html < content-length: 49 < server: CloudFront < date: Wed, 16 Jul 2025 14:58:42 GMT < expires: Wed, 16 Jul 2025 14:58:42 GMT < content-range: bytes */177 < x-cache: Error from cloudfront < via: 1.1 bb352451e1eacf85f8786ee3ecd07eca.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: 9xy-CX9KvlS8Ye4eFr8jXMDobZHFkvdyvkLJGmK_qiwZQywCCwfq7Q== < { [49 bytes data] 100 49 100 49 0 0 2381 0 --:--:-- --:--:-- --:--:-- 2450 * Connection #0 to host huggingface.co left intact (.venv) Daft main*​* ≑❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86 * Trying 18.160.102.96:443... * Connected to huggingface.co (18.160.102.96) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [319 bytes data] * CAfile: /etc/ssl/cert.pem * CApath: none * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3821 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=huggingface.co * start date: Apr 13 00:00:00 2025 GMT * expire date: May 12 23:59:59 2026 GMT * subjectAltName: host "huggingface.co" matched cert's "huggingface.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: huggingface.co] * [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 416 < content-type: text/html < content-length: 49 < server: CloudFront < date: Wed, 16 Jul 2025 14:58:43 GMT < expires: Wed, 16 Jul 2025 14:58:43 GMT < content-range: bytes */177 < x-cache: Error from cloudfront < via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: wtBgwY4u4YJ2pD1ovM8UV770UiJoqWfs7i7VzschDyoLv5g7swGGmw== < { [49 bytes data] 100 49 100 49 0 0 2273 0 --:--:-- --:--:-- --:--:-- 2333 * Connection #0 to host huggingface.co left intact (.venv) Daft main*​* ≑❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86 * Trying 18.160.102.96:443... * Connected to huggingface.co (18.160.102.96) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [319 bytes data] * CAfile: /etc/ssl/cert.pem * CApath: none * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3821 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=huggingface.co * start date: Apr 13 00:00:00 2025 GMT * expire date: May 12 23:59:59 2026 GMT * subjectAltName: host "huggingface.co" matched cert's "huggingface.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: huggingface.co] * [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 302 < content-type: text/plain; charset=utf-8 < content-length: 177 < location: https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet < date: Wed, 16 Jul 2025 14:58:44 GMT < x-powered-by: huggingface-moon < cross-origin-opener-policy: same-origin < referrer-policy: strict-origin-when-cross-origin < x-request-id: Root=1-6877be24-476860f03849cb1a1570c9d8 < access-control-allow-origin: https://huggingface.co < access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash < set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None < set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax < x-cache: Miss from cloudfront < via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: xuSi0X5RpH1OZqQOM8gGQLQLU8eOM6Gbkk-bgIX_qBnTTaa1VNkExA== < * Ignoring the response-body 100 177 100 177 0 0 2021 0 --:--:-- --:--:-- --:--:-- 2034 * Connection #0 to host huggingface.co left intact * Issue another request to this URL: 'https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet' * Found bundle for host: 0x600002d54570 [can multiplex] * Re-using existing connection with host huggingface.co * [HTTP/2] [3] OPENED stream for https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet * [HTTP/2] [3] [:method: GET] * [HTTP/2] [3] [:scheme: https] * [HTTP/2] [3] [:authority: huggingface.co] * [HTTP/2] [3] [:path: /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet] * [HTTP/2] [3] [user-agent: curl/8.7.1] * [HTTP/2] [3] [accept: */*] * [HTTP/2] [3] [range: bytes=217875070-218006142] > GET /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 302 < content-type: text/plain; charset=utf-8 < content-length: 1317 < location: 
https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC < date: Wed, 16 Jul 2025 14:58:44 GMT < x-powered-by: huggingface-moon < cross-origin-opener-policy: same-origin < referrer-policy: strict-origin-when-cross-origin < x-request-id: Root=1-6877be24-4f628b292dc8a7a5339c41d3 < access-control-allow-origin: https://huggingface.co < vary: Origin, Accept < access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash < set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None < set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax < x-repo-commit: 712df366ffbc959d9f4279bf2da579230b7ca5d8 < accept-ranges: bytes < x-linked-size: 218006142 < x-linked-etag: "01736bf26d0046ddec4ab8900fba3f0dc6500b038314b44d0edb73a7c88dec07" < x-xet-hash: cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9 < link: <https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/xet-read-token/712df366ffbc959d9f4279bf2da579230b7ca5d8>; rel="xet-auth", <https://cas-server.xethub.hf.co/reconstruction/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9>; rel="xet-reconstruction-info" < x-cache: Miss from cloudfront < via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: 0qXw2sJGrWCLVt7c-Vtn09uE3nu6CrJw9RmAKvNr_flG75muclvlIg== < * Ignoring the response-body 100 1317 100 1317 0 0 9268 0 --:--:-- --:--:-- --:--:-- 9268 * Connection #0 to host huggingface.co left intact * Issue another request to this URL: 
'https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC' * Host cas-bridge.xethub.hf.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.181.55, 18.160.181.54, 18.160.181.52, 18.160.181.88 * Trying 18.160.181.55:443... * Connected to cas-bridge.xethub.hf.co (18.160.181.55) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [328 bytes data] * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3818 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=cas-bridge.xethub.hf.co * start date: Jun 4 00:00:00 2025 GMT * expire date: Jul 3 23:59:59 2026 GMT * subjectAltName: host "cas-bridge.xethub.hf.co" matched cert's "cas-bridge.xethub.hf.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M04 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: cas-bridge.xethub.hf.co] * [HTTP/2] [1] [:path: /xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET 
/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC HTTP/2 > Host: cas-bridge.xethub.hf.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 206 < content-length: 131072 < date: Mon, 14 Jul 2025 08:40:28 GMT < x-request-id: 01K041FDPVA03RR2PRXDZSN30G < content-disposition: inline; filename*=UTF-8''0000.parquet; filename="0000.parquet"; < cache-control: public, max-age=31536000 < etag: "cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9" < access-control-allow-origin: * < access-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag < access-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache < x-cache: Hit from cloudfront < via: 1.1 1c857e24a4dc84d2d9c78d5b3463bed6.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P2 < x-amz-cf-id: 3SxFmQa5wLeeXbNiwaAo0_RwoR_n7-SivjsLjDLG-Pwn5UhG2oiEQA== < age: 195496 < content-security-policy: default-src 'none'; sandbox < content-range: bytes 217875070-218006141/218006142 < { [8192 bytes data] 100 128k 100 128k 0 0 769k 0 --:--:-- --:--:-- --:--:-- 769k * Connection #1 to host cas-bridge.xethub.hf.co left intact </details> ### Expected behavior always get back a `206` ### Environment info n/a
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7685/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
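For reference, a minimal Python sketch of the cache-busting workaround from the comments above; the query parameter name is arbitrary, and a fresh UUID simply makes the URL unique so the faulty CDN cache entry is bypassed:

```python
import uuid

import requests

url = (
    "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/"
    "Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
)
# Mirrors the curl workaround: same Range header, plus a unique
# "cachebust" query parameter so CloudFront cannot serve the bad entry.
resp = requests.get(
    url,
    params={"cachebust": uuid.uuid4().hex},
    headers={"Range": "bytes=217875070-218006142"},
)
resp.raise_for_status()
print(resp.status_code)  # 206 Partial Content when the range is honored
```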
https://github.com/huggingface/datasets/pull/7684
7,684
fix audio cast storage from array + sampling_rate
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7684). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-15T10:13:42Z
2025-07-15T10:24:08Z
2025-07-15T10:24:07Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7684.diff", "html_url": "https://github.com/huggingface/datasets/pull/7684", "merged_at": "2025-07-15T10:24:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/7684.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7684" }
fix https://github.com/huggingface/datasets/issues/7682
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7684/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7683
7,683
Convert to string when needed + faster .zstd
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-15T09:37:44Z
2025-07-15T10:13:58Z
2025-07-15T10:13:56Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7683.diff", "html_url": "https://github.com/huggingface/datasets/pull/7683", "merged_at": "2025-07-15T10:13:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/7683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7683" }
for https://huggingface.co/datasets/allenai/olmo-mix-1124
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7683/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7682
7,682
Fail to cast Audio feature for numpy arrays in datasets 4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/163345686?v=4", "events_url": "https://api.github.com/users/luatil-cloud/events{/privacy}", "followers_url": "https://api.github.com/users/luatil-cloud/followers", "following_url": "https://api.github.com/users/luatil-cloud/following{/other_user}", "gists_url": "https://api.github.com/users/luatil-cloud/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/luatil-cloud", "id": 163345686, "login": "luatil-cloud", "node_id": "U_kgDOCbx1Fg", "organizations_url": "https://api.github.com/users/luatil-cloud/orgs", "received_events_url": "https://api.github.com/users/luatil-cloud/received_events", "repos_url": "https://api.github.com/users/luatil-cloud/repos", "site_admin": false, "starred_url": "https://api.github.com/users/luatil-cloud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luatil-cloud/subscriptions", "type": "User", "url": "https://api.github.com/users/luatil-cloud", "user_view_type": "public" }
[]
closed
false
[ "thanks for reporting, I opened a PR and I'll make a patch release soon ", "> thanks for reporting, I opened a PR and I'll make a patch release soon\n\nThank you very much @lhoestq!" ]
2025-07-14T18:41:02Z
2025-07-15T12:10:39Z
2025-07-15T10:24:08Z
NONE
null
null
### Describe the bug Casting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails in version 4.0.0 but not in version 3.6.0 ### Steps to reproduce the bug The following `uv script` should be able to reproduce the bug in version 4.0.0 and pass in version 3.6.0 on macOS Sequoia 15.5 ```python # /// script # requires-python = ">=3.13" # dependencies = [ # "datasets[audio]==4.0.0", # "librosa>=0.11.0", # ] # /// # NAME # create_audio_dataset.py - create an audio dataset of sine waves # # SYNOPSIS # uv run create_audio_dataset.py # # DESCRIPTION # Create an audio dataset using the Hugging Face [datasets] library. # Illustrates how to create synthetic audio datasets using the [map] # datasets function. # # The strategy is to first create a dataset with the input to the # generation function, then execute the map function that generates # the result, and finally cast the final features. # # BUG # Casting features with Audio for numpy arrays - # done here with `ds.map(gen_sine, features=features)` fails # in version 4.0.0 but not in version 3.6.0 # # This happens both in cases where --extra audio is provided and where it is not. # When audio is not provided I've installed the latest compatible version # of soundfile. # # The error when soundfile is installed but the audio --extra is not # indicates that the array values do not have the `.T` property, # whilst also indicating that the value is a list instead of a numpy array. # # Last lines of the error report for the datasets + soundfile case # ... # # File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 239, in cast_storage # storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) # ~~~~~~~~~~~~~~~~~~~~~~^^^ # File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 122, in encode_example # sf.write(buffer, value["array"].T, value["sampling_rate"], format="wav") # ^^^^^^^^^^^^^^^^ # AttributeError: 'list' object has no attribute 'T' # ... # # For the case of datasets[audio] without explicitly adding soundfile I get an FFmpeg # error. # # Last lines of the error report: # # ... # RuntimeError: Could not load libtorchcodec. Likely causes: # 1. FFmpeg is not properly installed in your environment. We support # versions 4, 5, 6 and 7. # 2. The PyTorch version (2.7.1) is not compatible with # this version of TorchCodec. Refer to the version compatibility # table: # https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec. # 3. Another runtime dependency; see exceptions below. # The following exceptions were raised as we tried to load libtorchcodec: # # [start of libtorchcodec loading traceback] # FFmpeg version 7: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib, 0x0006): Library not loaded: @rpath/libavutil.59.dylib # Referenced from: <6DB21246-F28A-31A6-910A-D8F3355D1064> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib # Reason: no LC_RPATH's found # FFmpeg version 6: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib, 0x0006): Library not loaded: @rpath/libavutil.58.dylib # Referenced from: <BD3B44FC-E14B-3ABF-800F-BB54B6CCA3B1> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib # Reason: no LC_RPATH's found # FFmpeg version 5: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib, 0x0006): Library not loaded: @rpath/libavutil.57.dylib # Referenced from: <F06EBF8A-238C-3A96-BFBB-B34E0BBDABF0> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib # Reason: no LC_RPATH's found # FFmpeg version 4: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib, 0x0006): Library not loaded: @rpath/libavutil.56.dylib # Referenced from: <6E59F017-C703-3AF6-A271-6277DD5F8170> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib # Reason: no LC_RPATH's found # ... # # This is strange because the same error does not happen when using version 3.6.0 with datasets[audio]. # # The same error appears in python3.12 ## Imports import numpy as np from datasets import Dataset, Features, Audio, Value ## Parameters NUM_WAVES = 128 SAMPLE_RATE = 16_000 DURATION = 1.0 ## Input dataset arguments freqs = np.linspace(100, 2000, NUM_WAVES).tolist() ds = Dataset.from_dict({"frequency": freqs}) ## Features for the final dataset features = Features( {"frequency": Value("float32"), "audio": Audio(sampling_rate=SAMPLE_RATE)} ) ## Generate audio sine waves and cast features def gen_sine(example): t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False) wav = np.sin(2 * np.pi * example["frequency"] * t) return { "frequency": example["frequency"], "audio": {"array": wav, "sampling_rate": SAMPLE_RATE}, } ds = ds.map(gen_sine, features=features) print(ds) print(ds.features) ``` ### Expected behavior I expect the result of version `4.0.0` to be the same as that in version `3.6.0`. Switching the version in the script above to `3.6.0`, I get the following expected result: ``` $ uv run bug_report.py Map: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 128/128 [00:00<00:00, 204.58 examples/s] Dataset({ features: ['frequency', 'audio'], num_rows: 128 }) {'frequency': Value(dtype='float32', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None)} ``` ### Environment info - `datasets` version: 4.0.0 - Platform: macOS-15.5-arm64-arm-64bit-Mach-O - Python version: 3.13.1 - `huggingface_hub` version: 0.33.4 - PyArrow version: 20.0.0 - Pandas version: 2.3.1 - `fsspec` version: 2025.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7682/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7681
7,681
Probabilistic High Memory Usage and Freeze on Python 3.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4", "events_url": "https://api.github.com/users/ryan-minato/events{/privacy}", "followers_url": "https://api.github.com/users/ryan-minato/followers", "following_url": "https://api.github.com/users/ryan-minato/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ryan-minato", "id": 82735346, "login": "ryan-minato", "node_id": "MDQ6VXNlcjgyNzM1MzQ2", "organizations_url": "https://api.github.com/users/ryan-minato/orgs", "received_events_url": "https://api.github.com/users/ryan-minato/received_events", "repos_url": "https://api.github.com/users/ryan-minato/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions", "type": "User", "url": "https://api.github.com/users/ryan-minato", "user_view_type": "public" }
[]
open
false
[]
2025-07-14T01:57:16Z
2025-07-14T01:57:16Z
null
NONE
null
null
### Describe the bug A probabilistic issue is encountered when processing datasets containing PIL.Image columns using the huggingface/datasets library on Python 3.10. The process occasionally experiences a sudden and significant memory spike, reaching 100% utilization, leading to a complete freeze. During this freeze, the process becomes unresponsive, cannot be forcefully terminated, and does not throw any exceptions. I have attempted to mitigate this issue by setting `datasets.config.IN_MEMORY_MAX_SIZE`, but it had no effect. In fact, based on the documentation of `load_dataset`, I suspect that setting `IN_MEMORY_MAX_SIZE` might even have a counterproductive effect. This bug is not consistently reproducible, but its occurrence rate significantly decreases or disappears entirely when upgrading Python to version 3.11 or higher. Therefore, this issue also serves to share a potential solution for others who might encounter similar problems. ### Steps to reproduce the bug Due to the probabilistic nature of this bug, consistent reproduction cannot be guaranteed for every run. However, in my environment, processing large datasets like timm/imagenet-1k-wds (whether reading, casting, or mapping operations) almost certainly triggers the issue at some point. The probability of the issue occurring drastically increases when num_proc is set to a value greater than 1 during operations. When the issue occurs, my system logs repeatedly show the following warnings: ``` WARN: very high memory utilization: 57.74GiB / 57.74GiB (100 %) WARN: container is unhealthy: triggered memory limits (OOM) WARN: container is unhealthy: triggered memory limits (OOM) WARN: container is unhealthy: triggered memory limits (OOM) ``` ### Expected behavior The dataset should be read and processed normally without memory exhaustion or freezing. If an unrecoverable error occurs, an appropriate exception should be raised. I have found that upgrading Python to version 3.11 or above completely resolves this issue. On Python 3.11, when memory usage approaches 100%, it suddenly drops before slowly increasing again. I suspect this behavior is due to an expected memory management action, possibly involving writing to disk cache, which prevents the complete freeze observed in Python 3.10. ### Environment info - `datasets` version: 4.0.0 - Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.33.4 - PyArrow version: 20.0.0 - Pandas version: 2.3.1 - `fsspec` version: 2025.3.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7681/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7680
7,680
Question about iterable dataset and streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/73541181?v=4", "events_url": "https://api.github.com/users/Tavish9/events{/privacy}", "followers_url": "https://api.github.com/users/Tavish9/followers", "following_url": "https://api.github.com/users/Tavish9/following{/other_user}", "gists_url": "https://api.github.com/users/Tavish9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tavish9", "id": 73541181, "login": "Tavish9", "node_id": "MDQ6VXNlcjczNTQxMTgx", "organizations_url": "https://api.github.com/users/Tavish9/orgs", "received_events_url": "https://api.github.com/users/Tavish9/received_events", "repos_url": "https://api.github.com/users/Tavish9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tavish9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tavish9/subscriptions", "type": "User", "url": "https://api.github.com/users/Tavish9", "user_view_type": "public" }
[]
open
false
[ "> If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n\nyes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\n> load_dataset(streaming=True) is useful for huge dataset, but the speed is slow. How to make it comparable to to_iterable_dataset without loading the whole dataset into RAM?\n\nYou can aim for saturating your bandwidth using a DataLoader with num_workers and prefetch_factor. The maximum speed will be your internet bandwidth (unless your CPU is a bottleneck for CPU operations like image decoding).", "> > If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n> \n> yes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\nOkay, but `__getitem__` seems suitable for distributed settings. A distributed sampler would dispatch distinct indexes to each rank (rank0 got 0,1,2,3, rank1 got 4,5,6,7), however, if we make it `to_iterable_dataset`, then each rank needs to iterate all the samples, making it slower (i.e., rank1 got 0,1,2,3, rank2 got 0,1,2,3,(4,5,6,7))\n\nWhat's your opinion here?", "> however, if we make it to_iterable_dataset, then each rank needs to iterate all the samples, making it slower (i.e., rank1 got 0,1,2,3, rank2 got 0,1,2,3,(4,5,6,7))\n\nActually if you specify `to_iterable_dataset(num_shards=world_size)` (or a factor of world_size) and use a `torch.utils.data.DataLoader` then each rank will get a subset of the data thanks to the sharding. E.g. rank0 gets 0,1,2,3 and rank1 gets 4,5,6,7.\n\nThis is because `datasets.IterableDataset` subclasses `torch.utils.data.IterableDataset` and is aware of the current rank.", "Got it, very nice feature `num_shards` πŸ‘πŸ» \n\nI would benchmark `to_iterable_dataset(num_shards=world_size)` against the traditional map-style one in distributed settings in the near future.", "Hi @lhoestq , I ran a test for the speed on a single node. The results are not as expected from what you mentioned before.\n\n```python\nimport time\n\nimport datasets\nfrom torch.utils.data import DataLoader\n\n\ndef time_decorator(func):\n def wrapper(*args, **kwargs):\n start_time = time.time()\n result = func(*args, **kwargs)\n end_time = time.time()\n print(f\"Time taken: {end_time - start_time} seconds\")\n return result\n\n return wrapper\n\n\ndataset = datasets.load_dataset(\n \"parquet\", data_dir=\"my_dir\", split=\"train\"\n)\n\n\n@time_decorator\ndef load_dataset1():\n for _ in dataset:\n pass\n\n\n@time_decorator\ndef load_dataloader1():\n for _ in DataLoader(dataset, batch_size=100, num_workers=5):\n pass\n\n\n@time_decorator\ndef load_dataset2():\n for _ in dataset.to_iterable_dataset():\n pass\n\n\n@time_decorator\ndef load_dataloader2():\n for _ in DataLoader(dataset.to_iterable_dataset(num_shards=5), batch_size=100, num_workers=5):\n pass\n\n\nload_dataset1()\nload_dataloader1()\nload_dataset2()\nload_dataloader2()\n```\n```bash\nResolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 53192/53192 [00:00<00:00, 227103.16it/s]\nTime taken: 100.36162948608398 seconds\nTime taken: 70.09702134132385 seconds\nTime taken: 343.09229612350464 seconds\nTime taken: 132.8996012210846 seconds\n```\n\n1. Why `for _ in dataset.to_iterable_dataset()` is much slower than `for _ in dataset`\n2. The `70 < 132`, the dataloader is slower when `to_iterable_dataset`", "Loading in batches is faster than one example at a time. In your test the dataset is loaded in batches while the iterable_dataset is loaded one example at a time and the dataloader has a buffer to turn the examples to batches.\n\ncan you try this ?\n\n```\nbatched_dataset = dataset.batch(100, num_proc=5)\n\n@time_decorator\ndef load_dataloader3():\n for _ in DataLoader(batched_dataset.to_iterable_dataset(num_shards=5), batch_size=None, num_workers=5):\n pass\n```", "To be fair, I tested the time including batching:\n```python\n@time_decorator\ndef load_dataloader3():\n for _ in DataLoader(dataset.batch(100, num_proc=5).to_iterable_dataset(num_shards=5), batch_size=None, num_workers=5):\n pass\n```\n\n```bash\nTime taken: 49.722447633743286 seconds\n```", "I ran another test about shuffling.\n\n```python\n@time_decorator\ndef load_map_dataloader1():\n for _ in DataLoader(dataset, batch_size=100, num_workers=5, shuffle=True):\n pass\n\n@time_decorator\ndef load_map_dataloader2():\n for _ in DataLoader(dataset.batch(100, num_proc=5), batch_size=None, num_workers=5, shuffle=True):\n pass\n\n\n@time_decorator\ndef load_iter_dataloader1():\n for _ in DataLoader(dataset.batch(100, num_proc=5).to_iterable_dataset(num_shards=5).shuffle(buffer_size=1000), batch_size=None, num_workers=5):\n pass\n\nload_map_dataloader1()\nload_map_dataloader2()\nload_iter_dataloader1()\n```\n\n```bash\nTime taken: 43.8506863117218 seconds\nTime taken: 38.02591300010681 seconds\nTime taken: 53.38815689086914 seconds\n```\n\n\n- What if I have a custom collate_fn when batching?\n\n- And if I want to shuffle the dataset, what's the correct order for `to_iterable_dataset(num_shards=x)`, `batch()` and `shuffle()`? Is it `dataset.batch().to_iterable_dataset().shuffle()`? This is not faster than the map-style dataset" ]
2025-07-12T04:48:30Z
2025-08-01T13:01:48Z
null
NONE
null
null
In the docs, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78 I am confused: 1. If we have already loaded the dataset, why do `to_iterable_dataset`? Does it go through the dataset faster than a map-style dataset? 2. `load_dataset(streaming=True)` is useful for huge datasets, but the speed is slow. How can I make it comparable to `to_iterable_dataset` without loading the whole dataset into RAM?
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7680/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7679
7,679
metric glue breaks with 4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[]
closed
false
[ "I released `evaluate` 0.4.5 yesterday to fix the issue - sorry for the inconvenience:\n\n```\npip install -U evaluate\n```", "Thanks so much, @lhoestq!" ]
2025-07-10T21:39:50Z
2025-07-11T17:42:01Z
2025-07-11T17:42:01Z
CONTRIBUTOR
null
null
### Describe the bug This worked fine with 3.6.0; with 4.0.0, `eval_metric = metric.compute()` in HF Accelerate breaks. The code that fails is: https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84 ``` def simple_accuracy(preds, labels): print(preds, labels) print(f"{preds==labels}") return float((preds == labels).mean()) ``` data: ``` Column([1, 0, 0, 1, 1]) Column([1, 0, 0, 1, 0]) False ``` ``` [rank0]: return float((preds == labels).mean()) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^ [rank0]: AttributeError: 'bool' object has no attribute 'mean' ``` Some behavior has changed in this new major release of `datasets` and requires updating HF accelerate and perhaps the glue metric code, all of which belong to HF. ### Environment info datasets=4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7679/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7678
7,678
To support decoding audio data, please install 'torchcodec'.
{ "avatar_url": "https://avatars.githubusercontent.com/u/48163702?v=4", "events_url": "https://api.github.com/users/alpcansoydas/events{/privacy}", "followers_url": "https://api.github.com/users/alpcansoydas/followers", "following_url": "https://api.github.com/users/alpcansoydas/following{/other_user}", "gists_url": "https://api.github.com/users/alpcansoydas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alpcansoydas", "id": 48163702, "login": "alpcansoydas", "node_id": "MDQ6VXNlcjQ4MTYzNzAy", "organizations_url": "https://api.github.com/users/alpcansoydas/orgs", "received_events_url": "https://api.github.com/users/alpcansoydas/received_events", "repos_url": "https://api.github.com/users/alpcansoydas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alpcansoydas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alpcansoydas/subscriptions", "type": "User", "url": "https://api.github.com/users/alpcansoydas", "user_view_type": "public" }
[]
closed
false
[ "Hi ! yes you should `!pip install -U datasets[audio]` to have the required dependencies.\n\n`datasets` 4.0 now relies on `torchcodec` for audio decoding. The `torchcodec` AudioDecoder enables streaming from HF and also allows decoding ranges of audio", "Same issue on Colab.\n\n> !pip install -U datasets[audio] \n\nThis works for me. Thanks." ]
2025-07-10T09:43:13Z
2025-07-22T03:46:52Z
2025-07-11T05:05:42Z
NONE
null
null
In the latest version, datasets==4.0.0, I cannot print the audio data in a Colab notebook, but it works with version 3.6.0. !pip install -q -U datasets huggingface_hub fsspec from datasets import load_dataset downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train") print(downloaded_dataset["audio"][0]) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) [/tmp/ipython-input-4-90623240.py](https://localhost:8080/#) in <cell line: 0>() ----> 1 downloaded_dataset["audio"][0] 10 frames [/usr/local/lib/python3.11/dist-packages/datasets/features/audio.py](https://localhost:8080/#) in decode_example(self, value, token_per_repo_id) 170 from ._torchcodec import AudioDecoder 171 else: --> 172 raise ImportError("To support decoding audio data, please install 'torchcodec'.") 173 174 if not self.decode: ImportError: To support decoding audio data, please install 'torchcodec'. ### Environment info - `datasets` version: 4.0.0 - Platform: Linux-6.1.123+-x86_64-with-glibc2.35 - Python version: 3.11.13 - `huggingface_hub` version: 0.33.2 - PyArrow version: 18.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2025.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/48163702?v=4", "events_url": "https://api.github.com/users/alpcansoydas/events{/privacy}", "followers_url": "https://api.github.com/users/alpcansoydas/followers", "following_url": "https://api.github.com/users/alpcansoydas/following{/other_user}", "gists_url": "https://api.github.com/users/alpcansoydas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alpcansoydas", "id": 48163702, "login": "alpcansoydas", "node_id": "MDQ6VXNlcjQ4MTYzNzAy", "organizations_url": "https://api.github.com/users/alpcansoydas/orgs", "received_events_url": "https://api.github.com/users/alpcansoydas/received_events", "repos_url": "https://api.github.com/users/alpcansoydas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alpcansoydas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alpcansoydas/subscriptions", "type": "User", "url": "https://api.github.com/users/alpcansoydas", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7678/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7677
7,677
Toxicity fails with datasets 4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/82044803?v=4", "events_url": "https://api.github.com/users/serena-ruan/events{/privacy}", "followers_url": "https://api.github.com/users/serena-ruan/followers", "following_url": "https://api.github.com/users/serena-ruan/following{/other_user}", "gists_url": "https://api.github.com/users/serena-ruan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/serena-ruan", "id": 82044803, "login": "serena-ruan", "node_id": "MDQ6VXNlcjgyMDQ0ODAz", "organizations_url": "https://api.github.com/users/serena-ruan/orgs", "received_events_url": "https://api.github.com/users/serena-ruan/received_events", "repos_url": "https://api.github.com/users/serena-ruan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/serena-ruan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/serena-ruan/subscriptions", "type": "User", "url": "https://api.github.com/users/serena-ruan", "user_view_type": "public" }
[]
closed
false
[ "Hi ! You can fix this by upgrading `evaluate`:\n\n```\npip install -U evaluate\n```", "Thanks, verified evaluate 0.4.5 works!" ]
2025-07-10T06:15:22Z
2025-07-11T04:40:59Z
2025-07-11T04:40:59Z
NONE
null
null
### Describe the bug With the latest 4.0.0 release, the huggingface toxicity evaluation module fails with the error: `ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).` ### Steps to reproduce the bug Repro: ``` >>> toxicity.compute(predictions=["This is a response"]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/evaluate/module.py", line 467, in compute output = self._compute(**inputs, **compute_kwargs) File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 135, in _compute scores = toxicity(predictions, self.toxic_classifier, toxic_label) File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 103, in toxicity for pred_toxic in toxic_classifier(preds): File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 159, in __call__ result = super().__call__(*inputs, **kwargs) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1431, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1437, in run_single model_inputs = self.preprocess(inputs, **preprocess_params) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 183, in preprocess return self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2867, in __call__ encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2927, in _call_one raise ValueError( ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples). ``` ### Expected behavior This worked before the 4.0.0 release ### Environment info - `datasets` version: 4.0.0 - Platform: macOS-15.5-arm64-arm-64bit - Python version: 3.10.16 - `huggingface_hub` version: 0.33.0 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/82044803?v=4", "events_url": "https://api.github.com/users/serena-ruan/events{/privacy}", "followers_url": "https://api.github.com/users/serena-ruan/followers", "following_url": "https://api.github.com/users/serena-ruan/following{/other_user}", "gists_url": "https://api.github.com/users/serena-ruan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/serena-ruan", "id": 82044803, "login": "serena-ruan", "node_id": "MDQ6VXNlcjgyMDQ0ODAz", "organizations_url": "https://api.github.com/users/serena-ruan/orgs", "received_events_url": "https://api.github.com/users/serena-ruan/received_events", "repos_url": "https://api.github.com/users/serena-ruan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/serena-ruan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/serena-ruan/subscriptions", "type": "User", "url": "https://api.github.com/users/serena-ruan", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7677/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7676
7,676
Many things broken since the new 4.0.0 release
{ "avatar_url": "https://avatars.githubusercontent.com/u/37179323?v=4", "events_url": "https://api.github.com/users/mobicham/events{/privacy}", "followers_url": "https://api.github.com/users/mobicham/followers", "following_url": "https://api.github.com/users/mobicham/following{/other_user}", "gists_url": "https://api.github.com/users/mobicham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mobicham", "id": 37179323, "login": "mobicham", "node_id": "MDQ6VXNlcjM3MTc5MzIz", "organizations_url": "https://api.github.com/users/mobicham/orgs", "received_events_url": "https://api.github.com/users/mobicham/received_events", "repos_url": "https://api.github.com/users/mobicham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mobicham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mobicham/subscriptions", "type": "User", "url": "https://api.github.com/users/mobicham", "user_view_type": "public" }
[]
open
false
[ "Happy to take a look, do you have a list of impacted datasets ?", "Thanks @lhoestq , related to lm-eval, at least `winogrande`, `mmlu` and `hellaswag`, based on my tests yesterday. But many others like <a href=\"https://huggingface.co/datasets/lukaemon/bbh\">bbh</a>, most probably others too. ", "Hi @mobicham ,\n\nI was having the same issue `ValueError: Feature type 'List' not found` yesterday, when I tried to load my dataset using the `load_dataset()` function.\nBy updating to `4.0.0`, I don't see this error anymore.\n\np.s. I used `Sequence` in place of list when building my dataset (see below)\n```\nfeatures = Features({\n ...\n \"objects\": Sequence({\n \"id\": Value(\"int64\"),\n \"bbox\": Sequence(Value(\"float32\"), length=4),\n \"category\": Value(\"string\")\n }),\n ...\n})\ndataset = Dataset.from_dict(data_dict)\ndataset = dataset.cast(features)\n\n``` \n", "The issue comes from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train), [allenai/winogrande](https://huggingface.co/datasets/allenai/winogrande), [lukaemon/bbh](https://huggingface.co/datasets/lukaemon/bbh) and [Rowan/hellaswag](https://huggingface.co/datasets/Rowan/hellaswag) which are all unsupported in `datasets` 4.0 since they are based on python scripts. Fortunately there are PRs to fix those datasets (I did some of them a year ago but dataset authors haven't merged yet... will have to ping people again about it and update here):\n\n- https://huggingface.co/datasets/hails/mmlu_no_train/discussions/2 merged ! βœ… \n- https://huggingface.co/datasets/allenai/winogrande/discussions/6 merged ! βœ… \n- https://huggingface.co/datasets/Rowan/hellaswag/discussions/7 merged ! βœ… \n- https://huggingface.co/datasets/lukaemon/bbh/discussions/2 merged ! βœ… ", "Thank you very much @lhoestq , I will try next week πŸ‘ ", "I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both the dataset saving code and the loading code are <4.0.0 or >=4.0.0.", "This broke several lm-eval-harness workflows for me and reverting to older versions of datasets is not fixing the issue, does anyone have a workaround?", "> I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both the dataset saving code and the loading code are <4.0.0 or >=4.0.0.\n\n`datasets` 4.0 can load datasets saved using any older version. But the other way around is not always true: if you save a dataset with `datasets` 4.0 it may use the new `List` type that requires 4.0 and raise `ValueError: Feature type 'List' not found.`\n\nHowever issues with lm eval harness seem to come from another issue: unsupported dataset scripts (see https://github.com/huggingface/datasets/issues/7676#issuecomment-3057550659)\n\n> This broke several lm-eval-harness workflows for me and reverting to older versions of datasets is not fixing the issue, does anyone have a workaround?\n\nwhen reverting to an old `datasets` version I'd encourage you to clear your cache (by default it is located at `~/.cache/huggingface/datasets`) otherwise it might try to load a `List` type that didn't exist in old versions", "All the impacted datasets in lm eval harness have been fixed thanks to the reactivity of dataset authors ! let me know if you encounter issues with other datasets :)", "Hello folks, I have found `patrickvonplaten/librispeech_asr_dummy` to be another dataset that is currently broken since the 4.0.0 release. Is there a PR on this as well?", "https://huggingface.co/datasets/microsoft/prototypical-hai-collaborations seems to be impacted as well.\n\n```\n_temp = load_dataset(\"microsoft/prototypical-hai-collaborations\", \"wildchat1m_en3u-task_anns\")\n``` \nleads to \n`ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']`", "`microsoft/prototypical-hai-collaborations` is not impacted, you can load it using both `datasets` 3.6 and 4.0. I also tried on colab to confirm.\n\nOne thing that could explain `ValueError: Feature type 'List' not found.` is maybe if you have loaded and cached this dataset with `datasets` 4.0 and then tried to reload it from cache using 3.6.0.\n\nEDIT: actually I tried and 3.6 can reload datasets cached with 4.0 so I'm not sure why you have this error. Which version of `datasets` are you using ?", "> Hello folks, I have found patrickvonplaten/librispeech_asr_dummy to be another dataset that is currently broken since the 4.0.0 release. Is there a PR on this as well?\n\nI guess you can use [hf-internal-testing/librispeech_asr_dummy](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy) instead of `patrickvonplaten/librispeech_asr_dummy`, or ask the dataset author to convert their dataset to Parquet", "I am having a similar issue with these evals under leaderboard: https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/leaderboard\n\nsome datasets look pretty old (2 years), not sure if the authors would fix them", "For datasets based on scripts, I shared a command here to update them: https://github.com/huggingface/datasets/issues/7693#issuecomment-3253005348\n\nOtherwise if you are getting `ValueError: Feature type 'List' not found.` as in the original post, make sure you use `datasets` v4 to reload datasets that were loaded with v4." ]
2025-07-09T18:59:50Z
2025-09-18T16:33:34Z
null
NONE
null
null
### Describe the bug The new changes in 4.0.0 are breaking many datasets, including those from lm-evaluation-harness. I am trying to revert to older versions, like 3.6.0, to make the eval work, but I keep getting: ``` Python File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in generate_from_dict(obj) 1471 class_type = _FEATURE_TYPES.get(_type, None) or globals().get(_type, None) 1473 if class_type is None: -> 1474 raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}") 1476 if class_type == LargeList: 1477 feature = obj.pop("feature") ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf'] ``` ### Steps to reproduce the bug ``` Python import lm_eval model_eval = lm_eval.models.huggingface.HFLM(pretrained=model, tokenizer=tokenizer) lm_eval.evaluator.simple_evaluate(model_eval, tasks=["winogrande"], num_fewshot=5, batch_size=1) ``` ### Expected behavior Older `datasets` versions should work just fine as before ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39 - Python version: 3.12.11 - `huggingface_hub` version: 0.33.1 - PyArrow version: 20.0.0 - Pandas version: 2.3.1 - `fsspec` version: 2025.3.0
null
{ "+1": 22, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 22, "url": "https://api.github.com/repos/huggingface/datasets/issues/7676/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7675
7,675
common_voice_11_0.py failure in dataset library
{ "avatar_url": "https://avatars.githubusercontent.com/u/98793855?v=4", "events_url": "https://api.github.com/users/egegurel/events{/privacy}", "followers_url": "https://api.github.com/users/egegurel/followers", "following_url": "https://api.github.com/users/egegurel/following{/other_user}", "gists_url": "https://api.github.com/users/egegurel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/egegurel", "id": 98793855, "login": "egegurel", "node_id": "U_kgDOBeN5fw", "organizations_url": "https://api.github.com/users/egegurel/orgs", "received_events_url": "https://api.github.com/users/egegurel/received_events", "repos_url": "https://api.github.com/users/egegurel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/egegurel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/egegurel/subscriptions", "type": "User", "url": "https://api.github.com/users/egegurel", "user_view_type": "public" }
[]
open
false
[ "Hi ! This dataset is not in a supported format and `datasets` 4 doesn't support datasets that are based on python scripts, which are often a source of errors. Feel free to ask the dataset authors to convert the dataset to a supported format at https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/discussions, e.g. parquet.\n\nIn the meantime you can pin old versions of `datasets` like `datasets==3.6.0`", "Thanks @lhoestq! I encountered the same issue and switching to an older version of `datasets` worked.", ">which version of datasets worked for you, I tried switching to 4.6.0 and also moved back for fsspec, but am still facing issues with this.\n\n", "Try datasets<=3.6.0", "same issue " ]
2025-07-09T17:47:59Z
2025-07-22T09:35:42Z
null
NONE
null
null
### Describe the bug I tried to download the dataset but got this error: from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[8], line 4 1 from datasets import load_dataset ----> 4 load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True) File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs) 1387 verification_mode = VerificationMode( 1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 1389 ) 1391 # Create a dataset builder -> 1392 builder_instance = load_dataset_builder( 1393 path=path, 1394 name=name, 1395 data_dir=data_dir, 1396 data_files=data_files, 1397 cache_dir=cache_dir, 1398 features=features, 1399 download_config=download_config, 1400 download_mode=download_mode, 1401 revision=revision, 1402 token=token, 1403 storage_options=storage_options, 1404 **config_kwargs, 1405 ) 1407 # Return iterable dataset in case of streaming 1408 if streaming: File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs) 1130 if features is not None: 1131 features = _fix_for_backward_compatible_features(features) -> 1132 dataset_module = dataset_module_factory( 1133 path, 1134 revision=revision, 1135 download_config=download_config, 1136 download_mode=download_mode, 1137 data_dir=data_dir, 1138 data_files=data_files, 1139 cache_dir=cache_dir, 1140 ) 1141 # Get dataset builder class 1142 builder_kwargs = dataset_module.builder_kwargs File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs) 1026 if isinstance(e1, FileNotFoundError): 1027 raise FileNotFoundError( 1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. " 1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1030 ) from None -> 1031 raise e1 from None 1032 else: 1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.") File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs) 981 try: 982 api.hf_hub_download( 983 repo_id=path, 984 filename=filename, (...) 987 proxies=download_config.proxies, 988 ) --> 989 raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}") 990 except EntryNotFoundError: 991 # Use the infos from the parquet export except in some cases: 992 if data_dir or data_files or (revision and revision != "main"): RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py ### Steps to reproduce the bug from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True) ### Expected behavior It's supposed to download this dataset. ### Environment info Python 3.12, Windows 11
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7675/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7674
7,674
set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7674). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-09T15:01:25Z
2025-07-09T15:04:01Z
2025-07-09T15:01:33Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7674.diff", "html_url": "https://github.com/huggingface/datasets/pull/7674", "merged_at": "2025-07-09T15:01:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/7674.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7674" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7674/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7673
7,673
Release: 4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7673). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-09T14:03:16Z
2025-07-09T14:36:19Z
2025-07-09T14:36:18Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7673.diff", "html_url": "https://github.com/huggingface/datasets/pull/7673", "merged_at": "2025-07-09T14:36:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/7673.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7673" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7673/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7672
7,672
Fix double sequence
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-09T09:53:39Z
2025-07-09T09:56:29Z
2025-07-09T09:56:28Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7672.diff", "html_url": "https://github.com/huggingface/datasets/pull/7672", "merged_at": "2025-07-09T09:56:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/7672.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7672" }
```python >>> Features({"a": Sequence(Sequence({"c": Value("int64")}))}) {'a': List({'c': List(Value('int64'))})} ``` instead of `{'a': {'c': List(List(Value('int64')))}}`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7672/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7671
7,671
Mapping function not working if the first example is returned as None
{ "avatar_url": "https://avatars.githubusercontent.com/u/46325823?v=4", "events_url": "https://api.github.com/users/dnaihao/events{/privacy}", "followers_url": "https://api.github.com/users/dnaihao/followers", "following_url": "https://api.github.com/users/dnaihao/following{/other_user}", "gists_url": "https://api.github.com/users/dnaihao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaihao", "id": 46325823, "login": "dnaihao", "node_id": "MDQ6VXNlcjQ2MzI1ODIz", "organizations_url": "https://api.github.com/users/dnaihao/orgs", "received_events_url": "https://api.github.com/users/dnaihao/received_events", "repos_url": "https://api.github.com/users/dnaihao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaihao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaihao/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaihao", "user_view_type": "public" }
[]
closed
false
[ "Hi, map() always expect an output.\n\nIf you wish to filter examples, you should use filter(), in your case it could be something like this:\n\n```python\nds = ds.map(my_processing_function).filter(ignore_long_prompts)\n```", "Realized this! Thanks a lot, I will close this issue then." ]
2025-07-08T17:07:47Z
2025-07-09T12:30:32Z
2025-07-09T12:30:32Z
NONE
null
null
### Describe the bug https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37 Here we can see the writer is initialized when `i == 0`. However, there can be cases where, in the user's mapping function, the first example is filtered out (length constraints, etc.). In this case, the writer would be `None` and the code will report `NoneType has no write function`. A simple fix is available: change line 3652 from `if i == 0:` to `if writer is None:` ### Steps to reproduce the bug Prepare a dataset and use this mapping function: ``` import datasets def make_map_fn(split, max_prompt_tokens=3): def process_fn(example, idx): question = example['question'] reasoning_steps = example['reasoning_steps'] label = example['label'] answer_format = "" for i in range(len(reasoning_steps)): system_message = "Dummy" all_steps_formatted = [] content = f"""Dummy""" prompt = [ {"role": "system", "content": system_message}, {"role": "user", "content": content}, ] tokenized = tokenizer.apply_chat_template(prompt, return_tensors="pt", truncation=False) if tokenized.shape[1] > max_prompt_tokens: return None # skip overly long examples data = { "dummy": "dummy" } return data return process_fn ... # load your dataset ... train = train.map(function=make_map_fn('train'), with_indices=True) ``` ### Expected behavior The dataset mapping should work even when the first example is filtered out. ### Environment info I am using `datasets==3.6.0`, but I have observed this issue in the GitHub repo too: https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
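As a side note, `map()` is expected to return an example for every input row; a minimal runnable sketch of the map-then-filter pattern suggested in the comments above (the toy data and the `is_short_enough` helper are illustrative, not from the original report):

```python
from datasets import Dataset

ds = Dataset.from_dict({"question": ["short one", "a much longer question with many words"]})

def is_short_enough(example, max_words=4):
    # hypothetical length check; a real one would tokenize the chat prompt
    return len(example["question"].split()) <= max_words

def process_fn(example):
    # map() must return an example for every row -- no None allowed
    example["prompt"] = f"Dummy: {example['question']}"
    return example

ds = ds.map(process_fn).filter(is_short_enough)
print(ds)  # only the short example remains
```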
{ "avatar_url": "https://avatars.githubusercontent.com/u/46325823?v=4", "events_url": "https://api.github.com/users/dnaihao/events{/privacy}", "followers_url": "https://api.github.com/users/dnaihao/followers", "following_url": "https://api.github.com/users/dnaihao/following{/other_user}", "gists_url": "https://api.github.com/users/dnaihao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaihao", "id": 46325823, "login": "dnaihao", "node_id": "MDQ6VXNlcjQ2MzI1ODIz", "organizations_url": "https://api.github.com/users/dnaihao/orgs", "received_events_url": "https://api.github.com/users/dnaihao/received_events", "repos_url": "https://api.github.com/users/dnaihao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaihao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaihao/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaihao", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7671/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7670
7,670
Fix audio bytes
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7670). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-07T13:05:15Z
2025-07-07T13:07:47Z
2025-07-07T13:05:33Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7670.diff", "html_url": "https://github.com/huggingface/datasets/pull/7670", "merged_at": "2025-07-07T13:05:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/7670.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7670" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7670/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7669
7,669
How can I add my custom data to huggingface datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/219205504?v=4", "events_url": "https://api.github.com/users/xiagod/events{/privacy}", "followers_url": "https://api.github.com/users/xiagod/followers", "following_url": "https://api.github.com/users/xiagod/following{/other_user}", "gists_url": "https://api.github.com/users/xiagod/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiagod", "id": 219205504, "login": "xiagod", "node_id": "U_kgDODRDPgA", "organizations_url": "https://api.github.com/users/xiagod/orgs", "received_events_url": "https://api.github.com/users/xiagod/received_events", "repos_url": "https://api.github.com/users/xiagod/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiagod/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiagod/subscriptions", "type": "User", "url": "https://api.github.com/users/xiagod", "user_view_type": "public" }
[]
open
false
[ "Hey @xiagod \n\nThe easiest way to add your custom data to Hugging Face Datasets is to use the built-in load_dataset function with your local files. Some examples include:\n\nCSV files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"csv\", data_files=\"my_file.csv\")\n\nJSON or JSONL files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"json\", data_files=\"my_file.json\")\n\n\nImages stored in folders (e.g. data/train/cat/, data/train/dog/):\nfrom datasets import load_dataset\ndataset = load_dataset(\"imagefolder\", data_dir=\"/path/to/pokemon\")\n\n\nThese methods let you quickly create a custom dataset without needing to write a full script.\n\nMore information can be found in Hugging Face's tutorial \"Create a dataset\" or \"Load\" documentation here: \n\nhttps://huggingface.co/docs/datasets/create_dataset \n\nhttps://huggingface.co/docs/datasets/loading#local-and-remote-files\n\n\n\nIf you want to submit your dataset to the Hugging Face Datasets GitHub repo so others can load it follow this guide: \n\nhttps://huggingface.co/docs/datasets/upload_dataset \n\n\n" ]
2025-07-04T19:19:54Z
2025-07-05T18:19:37Z
null
NONE
null
null
I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7669/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7668
7,668
Broken EXIF crashes the whole program
{ "avatar_url": "https://avatars.githubusercontent.com/u/30485844?v=4", "events_url": "https://api.github.com/users/Seas0/events{/privacy}", "followers_url": "https://api.github.com/users/Seas0/followers", "following_url": "https://api.github.com/users/Seas0/following{/other_user}", "gists_url": "https://api.github.com/users/Seas0/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Seas0", "id": 30485844, "login": "Seas0", "node_id": "MDQ6VXNlcjMwNDg1ODQ0", "organizations_url": "https://api.github.com/users/Seas0/orgs", "received_events_url": "https://api.github.com/users/Seas0/received_events", "repos_url": "https://api.github.com/users/Seas0/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Seas0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Seas0/subscriptions", "type": "User", "url": "https://api.github.com/users/Seas0", "user_view_type": "public" }
[]
open
false
[ "There are other discussions about error handling for images decoding here : https://github.com/huggingface/datasets/issues/7632 https://github.com/huggingface/datasets/issues/7612\n\nand a PR here: https://github.com/huggingface/datasets/pull/7638 (would love your input on the proposed solution !)" ]
2025-07-03T11:24:15Z
2025-07-03T12:27:16Z
null
NONE
null
null
### Describe the bug When parsing this image in the ImageNet1K dataset, `datasets` crashes the whole training process just because it is unable to parse an invalid EXIF tag. ![Image](https://github.com/user-attachments/assets/3c840203-ac8c-41a0-9cf7-45f64488037d) ### Steps to reproduce the bug Using the `datasets.Image.decode_example` method to decode the aforementioned image reproduces the bug. The decoding function throws an unhandled exception at the `image.getexif()` method call due to an invalid UTF-8 stream in the EXIF tags. ``` File lib/python3.12/site-packages/datasets/features/image.py:188, in Image.decode_example(self, value, token_per_repo_id) 186 image = PIL.Image.open(BytesIO(bytes_)) 187 image.load() # to avoid "Too many open files" errors --> 188 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None: 189 image = PIL.ImageOps.exif_transpose(image) 190 if self.mode and self.mode != image.mode: File lib/python3.12/site-packages/PIL/Image.py:1542, in Image.getexif(self) 1540 xmp_tags = self.info.get("XML:com.adobe.xmp") 1541 if not xmp_tags and (xmp_tags := self.info.get("xmp")): -> 1542 xmp_tags = xmp_tags.decode("utf-8") 1543 if xmp_tags: 1544 match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 4312: invalid start byte ``` ### Expected behavior The invalid EXIF tag should simply be ignored, or a warning message issued, instead of crashing the whole program. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35 - Python version: 3.12.11 - `huggingface_hub` version: 0.33.0 - PyArrow version: 20.0.0 - Pandas version: 2.3.0 - `fsspec` version: 2025.3.0
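For illustration, a sketch of the tolerant decoding the report asks for, mirroring the decode path quoted in the traceback above; this is one possible approach under assumption, not necessarily the fix adopted in the linked PR:

```python
from io import BytesIO

import PIL.Image
import PIL.ImageOps

def decode_image_safely(bytes_: bytes) -> "PIL.Image.Image":
    image = PIL.Image.open(BytesIO(bytes_))
    image.load()  # to avoid "Too many open files" errors
    try:
        # same orientation handling as the quoted datasets code, but guarded
        if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None:
            image = PIL.ImageOps.exif_transpose(image)
    except (UnicodeDecodeError, SyntaxError, ValueError):
        # broken EXIF/XMP metadata: keep the image as decoded instead of crashing
        pass
    return image
```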
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7668/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7667
7,667
Fix infer list of images
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7667). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-02T15:07:58Z
2025-07-02T15:10:28Z
2025-07-02T15:08:03Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7667.diff", "html_url": "https://github.com/huggingface/datasets/pull/7667", "merged_at": "2025-07-02T15:08:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/7667.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7667" }
cc @kashif
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7667/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7666
7,666
Backward compat list feature
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7666). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-02T14:58:00Z
2025-07-02T15:00:37Z
2025-07-02T14:59:40Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7666.diff", "html_url": "https://github.com/huggingface/datasets/pull/7666", "merged_at": "2025-07-02T14:59:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/7666.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7666" }
cc @kashif
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7666/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7665
7,665
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
{ "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4", "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}", "followers_url": "https://api.github.com/users/zdzichukowalski/followers", "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}", "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zdzichukowalski", "id": 1151198, "login": "zdzichukowalski", "node_id": "MDQ6VXNlcjExNTExOTg=", "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs", "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events", "repos_url": "https://api.github.com/users/zdzichukowalski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions", "type": "User", "url": "https://api.github.com/users/zdzichukowalski", "user_view_type": "public" }
[]
closed
false
[ "Somehow I created the issue twiceπŸ™ˆ This one is an exact duplicate of #7664." ]
2025-07-01T17:14:53Z
2025-07-01T17:17:48Z
2025-07-01T17:17:48Z
NONE
null
null
### Describe the bug When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema. In my case there is a field `body:` with a string value ``` "### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4" ``` As a result, I got an exception ``` "TypeError: Couldn't cast array of type timestamp[s] to null". ``` The full stack trace is in the attached file below. I also attach a minimized dataset (data.json, a single entry) that reproduces the error. **Observations** (on the minimal example): - if I remove _all fields before_ `body`, a different error appears, - if I remove _all fields after_ `body`, yet another error appears, - if `body` is _the only field_, the error disappears. So this might be one complex bug or several edge cases interacting. I haven't dug deeper. Also, changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven't verified the correctness of that workaround yet. Anyway, my understanding is that `load_dataset` with the first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong. [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt) [data.json](https://github.com/user-attachments/files/21004164/data.json) P.S. I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt). I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts. ### Steps to reproduce the bug 1. Download the attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file. 2. Run the following code, which should work correctly: ``` from datasets import load_dataset load_dataset("json", data_files="data.json", split="train") ``` 3. Change the extension of the `data` file to `.jsonl` and run: ``` from datasets import load_dataset load_dataset("json", data_files="data.jsonl", split="train") ``` This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt). One can also try removing fields before the `body` field and after it. These actions give different errors. ### Expected behavior Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema. ### Environment info datasets version: _3.6.0_ pyarrow version: _20.0.0_ Python version: _3.11.9_ platform version: _macOS-15.5-arm64-arm-64bit_
{ "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4", "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}", "followers_url": "https://api.github.com/users/zdzichukowalski/followers", "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}", "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zdzichukowalski", "id": 1151198, "login": "zdzichukowalski", "node_id": "MDQ6VXNlcjExNTExOTg=", "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs", "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events", "repos_url": "https://api.github.com/users/zdzichukowalski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions", "type": "User", "url": "https://api.github.com/users/zdzichukowalski", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7665/reactions" }
duplicate
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7664
7,664
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
{ "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4", "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}", "followers_url": "https://api.github.com/users/zdzichukowalski/followers", "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}", "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zdzichukowalski", "id": 1151198, "login": "zdzichukowalski", "node_id": "MDQ6VXNlcjExNTExOTg=", "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs", "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events", "repos_url": "https://api.github.com/users/zdzichukowalski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions", "type": "User", "url": "https://api.github.com/users/zdzichukowalski", "user_view_type": "public" }
[]
open
false
[ "Hey @zdzichukowalski, I was not able to reproduce this on python 3.11.9 and datasets 3.6.0. The contents of \"body\" are correctly parsed as a string and no other fields like timestamps are created. Could you try reproducing this in a fresh environment, or posting the complete code where you encountered that stacktrace? (I noticed in the stacktrace you had a bigger program, perhaps there are some side effects)", "Hi @zdzichukowalski, thanks for reporting this!\n\nTo help investigate this further, could you please share the following:\n\nExact contents of the data.jsonl file you're using β€” especially the first few lines that trigger the error.\n\nThe full code snippet you used to run load_dataset(), along with any environment setup (if not already shared).\n\nCan you confirm whether the issue persists when running in a clean virtual environment (e.g., with only datasets, pyarrow, and their dependencies)?\n\nIf possible, could you try running the same with an explicit features schema, like:\n\n```\nfrom datasets import load_dataset, Features, Value\nfeatures = Features({\"body\": Value(\"string\")})\nds = load_dataset(\"json\", data_files=\"data.jsonl\", split=\"train\", features=features)\n```\nAlso, just to clarify β€” does the \"body\" field contain plain string content, or is it sometimes being parsed from multi-line or structured inputs (like embedded JSON or CSV-like text)?\n\nOnce we have this info, we can check whether this is a schema inference issue, a PyArrow type coercion bug, or something else.", "Ok I can confirm that I also cannot reproduce the error in a clean environment with the minimized version of the dataset that I provided. Same story for the old environment. Nonetheless the bug still happens in the new environment with the full version of the dataset, which I am providing now. Please let me know if now you can reproduce the problem.\n\nAdditionally I'm attaching result of the `pip freeze` command.\n\n[datasets-issues.jsonl.zip](https://github.com/user-attachments/files/21081755/datasets-issues.jsonl.zip)\n[requirements.txt](https://github.com/user-attachments/files/21081776/requirements.txt)\n\n@ArjunJagdale running with explicit script gives the following stack:\n[stack_features_version.txt](https://github.com/user-attachments/files/21082056/stack_features_version.txt)\n\nThe problematic `body` field seems to be e.g. content of [this comment](https://github.com/huggingface/datasets/issues/5596#issue-1604919993) from Github in which someone provided a stack trace containing json structure ;) I would say that it is intended to be a plain string. \n\nTo find a part that triggers an error, simply search for the \"timestamp[s]\" in the dataset. There are few such entries.\n\nI think I provided all the information you asked. \n\nOh, and workaround I suggested, that is convert `.jsonl` to `.json` worked for me.\n\nP.S\n1. @itsmejul the stack trace I provided is coming from running the two-liner script that I attached. There is no bigger program, although there were some jupiter files alongside the script, which were run in the same env. I am not sure what part of the stack trace suggests that there is something more ;) \n\n2. Is it possible that on some layer in the python/env/jupiter there is some caching mechanism for files that would give false results for my minimized version of the dataset file? There is of course possibility that I made a mistake and run the script with the wrong file, but I double and triple checked things before creating an issue. 
Earlier I wrote that \"(...) changing the file extension to `.json` or `.txt` avoids the problem\". But with the full version this is not true(when I change to `txt`), and minimized version always works. So it looks like that when I changed the extension to e.g. `txt` then a minimized file loaded from the disk and it was parsed correctly, but every time when I changed back to `jsonl` my script must have used an original content of the file - the one before I made a minimization. But this is still all strange because I even removed the fields before and after the body from my minimized `jsonl` and there were some different errors(I mention it in my original post), so I do not get why today I cannot reproduce it in the original env... \n\n", "Hi @zdzichukowalski, thanks again for the detailed info and files!\n\nI’ve reviewed the `datasets-issues.jsonl` you shared, and I can now confirm the issue with full clarity:\n\nSome entries in the `\"body\"` field contain string content that resembles schema definitions β€” for example:\n\n```\nstruct<type: string, action: string, datetime: timestamp[s], ...>\n```\n\nThese strings appear to be copied from GitHub comments or stack traces (e.g., from #5596)\n\nWhen using the `.jsonl` format, `load_dataset()` relies on row-wise schema inference via PyArrow. If some rows contain real structured fields like `pull_request.merged_at` (a valid timestamp), and others contain schema-like text inside string fields, PyArrow can get confused while unifying the schema β€” leading to cast errors.\n\nThat’s why:\n\n* Using a reduced schema like `features={\"body\": Value(\"string\")}` fails β€” because the full table has many more fields.\n* Converting the file to `.json` (a list of objects) works β€” because global schema inference kicks in.\n* Filtering the dataset to only the `body` field avoids the issue entirely.\n\n### Suggested Workarounds\n\n* Convert the `.jsonl` file to `.json` to enable global schema inference.\n* Or, preprocess the `.jsonl` file to extract only the `\"body\"` field if that’s all you need.", "So in summary should we treat it as a low severity bug in `PyArrow`, in `Datasets` library, or as a proper behavior and do nothing with it?", "You are right actually! I’d also categorize this as a low-severity schema inference edge case, mainly stemming from PyArrow, but exposed by how datasets handles .jsonl inputs.\n\nIt's not a bug in datasets per se, but confusing when string fields (like body) contain text that resembles schema β€” e.g., \"timestamp[s]\".\n\nMaybe @lhoestq β€” could this be considered as a small feature/improvement?" ]
2025-07-01T17:14:32Z
2025-07-09T13:14:11Z
null
NONE
null
null
### Describe the bug When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema. In my case there is a field `body:` with a string value ``` "### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4" ``` As a result, I got an exception ``` "TypeError: Couldn't cast array of type timestamp[s] to null". ``` The full stack trace is in the attached file below. I also attach a minimized dataset (data.json, a single entry) that reproduces the error. **Observations** (on the minimal example): - if I remove _all fields before_ `body`, a different error appears, - if I remove _all fields after_ `body`, yet another error appears, - if `body` is _the only field_, the error disappears. So this might be one complex bug or several edge cases interacting. I haven't dug deeper. Also, changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven't verified the correctness of that workaround yet. Anyway, my understanding is that `load_dataset` with the first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong. [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt) [data.json](https://github.com/user-attachments/files/21004164/data.json) P.S. I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt). I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts. ### Steps to reproduce the bug 1. Download the attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file. 2. Run the following code, which should work correctly: ``` from datasets import load_dataset load_dataset("json", data_files="data.json", split="train") ``` 3. Change the extension of the `data` file to `.jsonl` and run: ``` from datasets import load_dataset load_dataset("json", data_files="data.jsonl", split="train") ``` This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt). One can also try removing fields before the `body` field and after it. These actions give different errors. ### Expected behavior Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema. ### Environment info datasets version: _3.6.0_ pyarrow version: _20.0.0_ Python version: _3.11.9_ platform version: _macOS-15.5-arm64-arm-64bit_
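A sketch of the first suggested workaround, assuming the `data.jsonl` file from the attachments: rewrite the JSON Lines file as a single JSON array so that one global schema is inferred instead of per-row inference:

```python
import json

from datasets import load_dataset

# read the JSON Lines file, one JSON object per non-empty line
with open("data.jsonl", encoding="utf-8") as src:
    rows = [json.loads(line) for line in src if line.strip()]

# write the same records back as one JSON array
with open("data.json", "w", encoding="utf-8") as dst:
    json.dump(rows, dst, ensure_ascii=False)

ds = load_dataset("json", data_files="data.json", split="train")
```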
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7664/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7663
7,663
Custom metadata filenames
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7663). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-07-01T13:50:36Z
2025-07-01T13:58:41Z
2025-07-01T13:58:39Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7663.diff", "html_url": "https://github.com/huggingface/datasets/pull/7663", "merged_at": "2025-07-01T13:58:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/7663.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7663" }
example: https://huggingface.co/datasets/lhoestq/overlapping-subsets-imagefolder/tree/main To make multiple subsets for an imagefolder (one metadata file per subset), e.g. ```yaml configs: - config_name: default metadata_filenames: - metadata.csv - config_name: other metadata_filenames: - metadata2.csv ```
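With that YAML in place, each subset should be loadable by its config name; a short sketch using the example repository linked above:

```python
from datasets import load_dataset

# each metadata file maps to one config, loadable by name
default_subset = load_dataset("lhoestq/overlapping-subsets-imagefolder", "default")
other_subset = load_dataset("lhoestq/overlapping-subsets-imagefolder", "other")
```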
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7663/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7662
7,662
Applying map after transform with multiprocessing will cause OOM
{ "avatar_url": "https://avatars.githubusercontent.com/u/26482910?v=4", "events_url": "https://api.github.com/users/JunjieLl/events{/privacy}", "followers_url": "https://api.github.com/users/JunjieLl/followers", "following_url": "https://api.github.com/users/JunjieLl/following{/other_user}", "gists_url": "https://api.github.com/users/JunjieLl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JunjieLl", "id": 26482910, "login": "JunjieLl", "node_id": "MDQ6VXNlcjI2NDgyOTEw", "organizations_url": "https://api.github.com/users/JunjieLl/orgs", "received_events_url": "https://api.github.com/users/JunjieLl/received_events", "repos_url": "https://api.github.com/users/JunjieLl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JunjieLl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JunjieLl/subscriptions", "type": "User", "url": "https://api.github.com/users/JunjieLl", "user_view_type": "public" }
[]
open
false
[ "Hi ! `add_column` loads the full column data in memory:\n\nhttps://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021\n\na workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time", "> Hi ! `add_column` loads the full column data in memory:\n> \n> [datasets/src/datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021)\n> \n> Line 6021 in [bfa497b](/huggingface/datasets/commit/bfa497b1666f4c58bd231c440d8b92f9859f3a58)\n> \n> column_table = InMemoryTable.from_pydict({name: column}, schema=pyarrow_schema) \n> a workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time\n\n\nHow about cast_column,since map cannot apply type transformation, e.g. Audio(16000) to Audio(24000)", "cast_column calls `pyarrow.Table.cast` on the full dataset which I believe the memory usage depends on the source and target types but should be low in general\n\ncasting from Audio(16000) to Audio(24000) is cheap since the source and target arrow types are the same", "> cast_column calls `pyarrow.Table.cast` on the full dataset which I believe the memory usage depends on the source and target types but should be low in general\n> \n> casting from Audio(16000) to Audio(24000) is cheap since the source and target arrow types are the same\n\nThanks for replying. So the OOM is caused by add_column operation. When I skip the operation, low memory will be achieved. Right?", "> Hi ! `add_column` loads the full column data in memory:\n> \n> [datasets/src/datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021)\n> \n> Line 6021 in [bfa497b](/huggingface/datasets/commit/bfa497b1666f4c58bd231c440d8b92f9859f3a58)\n> \n> column_table = InMemoryTable.from_pydict({name: column}, schema=pyarrow_schema) \n> a workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time\n\n\nNote num_process=1 would not cause OOM. I'm confused.\n\n" ]
2025-07-01T05:45:57Z
2025-07-10T06:17:40Z
null
NONE
null
null
### Describe the bug I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I found that the OOM is caused at this point, and I suspect it's because the add_column and cast_column operations are not cached, which causes the entire dataset to be loaded in each subprocess, leading to the OOM. The critical line of code is: https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/py_utils.py#L607 Note that num_process=1 does not cause OOM, which I find confusing. ### Steps to reproduce the bug To reproduce, load the amphion/Emilia-Dataset dataset with cache_dir set (for caching); it is a very large dataset that does not fit in RAM. Then apply map with multiprocessing after a transform operation (e.g. add_column, cast_column). As long as num_process>1, it causes OOM. ### Expected behavior It should not cause OOM. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-5.10.134-16.101.al8.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.33.1 - PyArrow version: 20.0.0 - Pandas version: 2.3.0 - `fsspec` version: 2024.6.1
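For reference, a minimal sketch of the workaround suggested in the comments above: attach the new data inside map(), which only loads one batch at a time, instead of calling add_column (the toy columns are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
new_column = ["x", "y", "z"]  # hypothetical per-row data to attach

# instead of ds.add_column("extra", new_column), which materializes the whole
# column in memory, attach the values batch by batch inside map()
ds = ds.map(
    lambda batch, idxs: {"extra": [new_column[i] for i in idxs]},
    with_indices=True,
    batched=True,
)
print(ds.column_names)  # ['text', 'extra']
```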
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7662/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7661
7,661
fix del tqdm lock error
{ "avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4", "events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}", "followers_url": "https://api.github.com/users/Hypothesis-Z/followers", "following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}", "gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hypothesis-Z", "id": 44766273, "login": "Hypothesis-Z", "node_id": "MDQ6VXNlcjQ0NzY2Mjcz", "organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs", "received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events", "repos_url": "https://api.github.com/users/Hypothesis-Z/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions", "type": "User", "url": "https://api.github.com/users/Hypothesis-Z", "user_view_type": "public" }
[]
open
false
[ "let's see which solution is found at https://github.com/huggingface/huggingface_hub/pull/3286 and do the same maybe ?" ]
2025-07-01T02:04:02Z
2025-08-13T13:16:44Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7661.diff", "html_url": "https://github.com/huggingface/datasets/pull/7661", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7661.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7661" }
fixes https://github.com/huggingface/datasets/issues/7660
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7661/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7660
7,660
AttributeError: type object 'tqdm' has no attribute '_lock'
{ "avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4", "events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}", "followers_url": "https://api.github.com/users/Hypothesis-Z/followers", "following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}", "gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hypothesis-Z", "id": 44766273, "login": "Hypothesis-Z", "node_id": "MDQ6VXNlcjQ0NzY2Mjcz", "organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs", "received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events", "repos_url": "https://api.github.com/users/Hypothesis-Z/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions", "type": "User", "url": "https://api.github.com/users/Hypothesis-Z", "user_view_type": "public" }
[]
open
false
[ "Deleting a class (**not instance**) attribute might be invalid in this case, which is `tqdm` doing in `ensure_lock`.\n\n```python\nfrom tqdm import tqdm as old_tqdm\n\nclass tqdm1(old_tqdm):\n def __delattr__(self, attr):\n try:\n super().__delattr__(attr)\n except AttributeError:\n if attr != '_lock':\n print(attr)\n raise\n\nclass Meta(type):\n def __delattr__(cls, name):\n if name == \"_lock\":\n return \n return super().__delattr__(name)\n \nclass tqdm2(old_tqdm, metaclass=Meta):\n pass\n\ndel tqdm2._lock\ndel tqdm1._lock # error\n```\n\nhttps://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/tqdm.py#L104-L122", "A cheaper option (seems to work in my case): \n```python\nfrom datasets import tqdm as hf_tqdm\nhf_tqdm.set_lock(hf_tqdm.get_lock())\n```" ]
2025-06-30T15:57:16Z
2025-07-03T15:14:27Z
null
NONE
null
null
### Describe the bug `AttributeError: type object 'tqdm' has no attribute '_lock'` It occurs when I'm trying to load datasets in a thread pool. Issue https://github.com/huggingface/datasets/issues/6066 and PRs https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to fix this. ### Steps to reproduce the bug You will have to try several times to reproduce the error because it depends on thread timing. 1. Save some datasets for testing ```python from datasets import Dataset, DatasetDict import os os.makedirs("test_dataset_shards", exist_ok=True) for i in range(10): data = Dataset.from_dict({"text": [f"example {j}" for j in range(1000000)]}) data = DatasetDict({'train': data}) data.save_to_disk(f"test_dataset_shards/shard_{i}") ``` 2. Load them in a thread pool ```python from datasets import load_from_disk from tqdm import tqdm from concurrent.futures import ThreadPoolExecutor, as_completed import glob datas = glob.glob('test_dataset_shards/shard_*') with ThreadPoolExecutor(max_workers=10) as pool: futures = [pool.submit(load_from_disk, it) for it in datas] datas = [] for future in tqdm(as_completed(futures), total=len(futures)): datas.append(future.result()) ``` ### Expected behavior No exception raised. ### Environment info datasets==2.19.0 python==3.10
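A sketch of the cheap workaround quoted in the comments above, which pre-initializes the class-level lock before worker threads race on it:

```python
from datasets import tqdm as hf_tqdm

# initialize the shared class-level lock once, up front, so no thread later
# hits `AttributeError: type object 'tqdm' has no attribute '_lock'`
hf_tqdm.set_lock(hf_tqdm.get_lock())
```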
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7660/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7659
7,659
Update the beans dataset link in Preprocess
{ "avatar_url": "https://avatars.githubusercontent.com/u/5434867?v=4", "events_url": "https://api.github.com/users/HJassar/events{/privacy}", "followers_url": "https://api.github.com/users/HJassar/followers", "following_url": "https://api.github.com/users/HJassar/following{/other_user}", "gists_url": "https://api.github.com/users/HJassar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HJassar", "id": 5434867, "login": "HJassar", "node_id": "MDQ6VXNlcjU0MzQ4Njc=", "organizations_url": "https://api.github.com/users/HJassar/orgs", "received_events_url": "https://api.github.com/users/HJassar/received_events", "repos_url": "https://api.github.com/users/HJassar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HJassar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HJassar/subscriptions", "type": "User", "url": "https://api.github.com/users/HJassar", "user_view_type": "public" }
[]
closed
false
[]
2025-06-30T09:58:44Z
2025-07-07T08:38:19Z
2025-07-01T14:01:42Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7659.diff", "html_url": "https://github.com/huggingface/datasets/pull/7659", "merged_at": "2025-07-01T14:01:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/7659.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7659" }
In the Preprocess tutorial, the link to "the beans dataset" is incorrect. Fixed.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7659/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7658
7,658
Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
[ "Hi!\r\nI haven’t included a test for this change, as the fix is quite small and targeted.\r\nPlease let me know if you’d like a test for this case or if you’d prefer to handle it during review.\r\nThanks!", "we can't know in advance the `features` after map() (it transforms the data !), so you can reuse the `features` from `info.features`", "I'll the patch as suggested β€” `info.features = features` or `self.info.features` β€” to ensure schema preservation while keeping the logic simple and explicit. WDYT?\r\n", "info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n\r\nhttps://github.com/huggingface/datasets/issues/7568 is not an issue we can fix", "> info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n> \r\n> #7568 is not an issue we can fix\r\n\r\nThanks for the clarification! Totally makes sense now β€” I understand that features=None is the expected behavior post-map() unless explicitly passed, and that preserving old schema by default could lead to incorrect assumptions.\r\nClosing this one β€” appreciate the feedback as always" ]
2025-06-30T09:31:12Z
2025-07-01T16:26:30Z
2025-07-01T16:26:12Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7658.diff", "html_url": "https://github.com/huggingface/datasets/pull/7658", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7658.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7658" }
This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_names`. Why: Previously, the code would always set `info.features = features`, even if `features` was `None`. When mapping with removal of columns or other transformations, this led to the destruction of the schema and caused failures in code that relied on the dataset schema being present. How: We now update `info.features` only if `features` is not `None`. This preserves the original schema unless the user explicitly provides a new one. Reference: Fixes #7568
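The change amounts to a one-line guard; a sketch of the proposed logic with a hypothetical helper name (note that, per the review discussion above, the maintainers ultimately preferred keeping `features=None` after `map()`):

```python
from typing import Optional

from datasets import DatasetInfo, Features

def set_features(info: DatasetInfo, features: Optional[Features]) -> DatasetInfo:
    # only overwrite the schema when the caller passes features explicitly;
    # otherwise keep the existing info.features
    if features is not None:
        info.features = features
    return info
```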
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7658/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7657
7,657
feat: add subset_name as alias for name in load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
[]
2025-06-29T10:39:00Z
2025-07-18T17:45:41Z
null
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7657.diff", "html_url": "https://github.com/huggingface/datasets/pull/7657", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7657.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7657" }
Fixes #7637

This PR introduces `subset_name` as a user-facing alias for the `name` (previously `config_name`) argument in `load_dataset`. It aligns terminology with the Hugging Face Hub UI (which shows "Subset"), reducing confusion for new users.

- Supports `subset_name` in `load_dataset()`
- Adds a `.subset_name` property to `DatasetBuilder`
- Maintains full backward compatibility
- Raises a clear error if `name` and `subset_name` conflict
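A minimal sketch of the alias handling described above, assuming a heavily simplified `load_dataset` signature; the real function takes many more parameters:

```python
# Hypothetical, simplified signature; only the alias logic is sketched here.
def load_dataset(path, name=None, *, subset_name=None, **kwargs):
    if subset_name is not None:
        if name is not None and name != subset_name:
            raise ValueError(
                "'subset_name' is an alias for 'name'; pass only one of them "
                "or give them the same value."
            )
        name = subset_name  # from here on, the existing 'name' code path applies
    ...
```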
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7657/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7656
7,656
fix(iterable): ensure MappedExamplesIterable supports state_dict for resume
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
[]
2025-06-29T07:50:13Z
2025-06-29T07:50:13Z
null
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7656.diff", "html_url": "https://github.com/huggingface/datasets/pull/7656", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7656.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7656" }
Fixes #7630

### Problem

When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable.

### What This PR Does

This patch adds:

```python
def state_dict(self):
    return self.ex_iterable.state_dict()

def load_state_dict(self, state):
    self.ex_iterable.load_state_dict(state)
```

to `MappedExamplesIterable`, so the wrapped base iterable's state can be saved and restored as expected.

### Result

Using `.map()` no longer causes sample skipping after checkpoint resume.

Let me know if a dedicated test case is required β€” happy to add one!
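A usage sketch of the checkpoint/resume pattern this delegation enables; the pattern follows the `datasets` resumable-streaming docs, and the doubling map is just an illustrative transform:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(6))}).to_iterable_dataset().map(
    lambda ex: {"x": ex["x"] * 2}
)

state = None
for idx, example in enumerate(ds):
    if idx == 2:
        state = ds.state_dict()  # checkpoint after the 3rd example
        break

ds.load_state_dict(state)  # with this fix, iteration resumes after example 3
print(list(ds))
```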
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7656/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7655
7,655
Added specific use cases in Improve Performance
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
[]
2025-06-28T19:00:32Z
2025-06-28T19:00:32Z
null
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7655.diff", "html_url": "https://github.com/huggingface/datasets/pull/7655", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7655.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7655" }
Fixes #2494
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7655/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7654
7,654
fix(load): strip deprecated use_auth_token from config_kwargs
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
[]
2025-06-28T09:20:21Z
2025-06-28T09:20:21Z
null
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7654.diff", "html_url": "https://github.com/huggingface/datasets/pull/7654", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7654.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7654" }
Fixes #7504

This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`.

**What was happening:**

Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError`:

```
BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key.
```

**Why:**

`use_auth_token` has been deprecated and removed from config definitions (replaced by `token`), but the `load_dataset()` function still forwarded it via `**config_kwargs` to BuilderConfigs, leading to unrecognized-key errors.

**Fix:**

We now intercept and strip `use_auth_token` from `config_kwargs` inside `load_dataset`, replacing it with a warning:

```python
if "use_auth_token" in config_kwargs:
    logger.warning("The 'use_auth_token' argument is deprecated. Please use 'token' instead.")
    config_kwargs.pop("use_auth_token")
```

This ensures legacy compatibility while guiding users to switch to the `token` argument.

Let me know if you'd prefer a deprecation error instead of a warning. Thanks!
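A usage sketch of the patched behavior; the data file name and token value are hypothetical, and the warning text is the one added in this PR:

```python
from datasets import load_dataset

# Before the patch this call raised:
#   ValueError: BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key.
# After the patch the deprecated argument is stripped with a warning instead.
ds = load_dataset("parquet", data_files="data.parquet", use_auth_token="hf_xxx")
# WARNING: The 'use_auth_token' argument is deprecated. Please use 'token' instead.
```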
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7654/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7653
7,653
feat(load): fallback to `load_from_disk()` when loading a saved dataset directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
[]
2025-06-28T08:47:36Z
2025-06-28T08:47:36Z
null
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7653.diff", "html_url": "https://github.com/huggingface/datasets/pull/7653", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7653.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7653" }
### Related Issue

Fixes #7503
Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets.

---

### What does this PR do?

This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `path` points to a dataset saved using `save_to_disk()`, and automatically redirects to `load_from_disk()`.

#### πŸ› Before (unexpected metadata-only rows):

```python
ds = load_dataset("/path/to/saved_dataset")
# β†’ returns rows with only internal metadata (_data_files, _fingerprint, etc.)
```

#### βœ… After (graceful fallback):

```python
ds = load_dataset("/path/to/saved_dataset")
# β†’ logs a warning and internally switches to load_from_disk()
```

---

### Why is this useful?

* Prevents confusion when reloading local datasets saved via `save_to_disk()`.
* Enables smoother compatibility with frameworks (e.g., TRL, `lighteval`) that rely on `load_dataset()` calls.
* Fully backward-compatible: hub-based loading, custom builders, and streaming remain untouched.
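A minimal sketch of the detection step, assuming the marker files written by `save_to_disk()`; the helper name is hypothetical and the exact check in the PR may differ:

```python
import os

def _looks_like_saved_dataset(path: str) -> bool:
    # DatasetDict.save_to_disk() writes "dataset_dict.json" at the top level;
    # Dataset.save_to_disk() writes "state.json" (plus "dataset_info.json").
    return os.path.isdir(path) and (
        os.path.isfile(os.path.join(path, "dataset_dict.json"))
        or os.path.isfile(os.path.join(path, "state.json"))
    )
```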
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7653/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7652
7,652
Add columns support to JSON loader for selective key filtering
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
[ "I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.", "> I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.\r\n\r\nHi @aihao2000, Just to confirm β€” I have done the changes you asked for!\r\nIf you pass columns=[\"key1\", \"key2\", \"optional_key\"] to load_dataset(..., columns=...), and any of those keys are missing from the input JSON objects, the loader will automatically fill those columns with None values, instead of raising an error.", "Hi! any update on this PR?" ]
2025-06-27T16:18:42Z
2025-09-04T17:35:31Z
2025-09-04T17:35:31Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7652.diff", "html_url": "https://github.com/huggingface/datasets/pull/7652", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7652.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7652" }
Fixes #7594

This PR adds support for filtering specific columns when loading datasets from .json or .jsonl files, similar to how the `columns=...` argument works for Parquet.

As suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading via `load_dataset(...)`. You can now load only specific keys/columns and skip the rest, which should help in cases where some fields are unclean, inconsistent, or just unnecessary.

### Example:

```python
from datasets import load_dataset

dataset = load_dataset("json", data_files="your_data.jsonl", columns=["id", "title"])
print(dataset["train"].column_names)
# Output: ['id', 'title']
```

### Summary of changes:

* Added `columns: Optional[List[str]]` to `JsonConfig`
* Updated `_generate_tables()` to filter selected columns
* Forwarded the `columns` argument from `load_dataset()` to the config
* Added a test for validation (should be fine!)

Let me know if you'd like the same to be added for CSV or others as a follow-up β€” happy to help.
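A minimal sketch of the column filtering inside `_generate_tables()`, with missing keys null-filled as discussed in the comments; the helper name is hypothetical:

```python
import pyarrow as pa

def _select_columns(pa_table: pa.Table, columns):
    # Keep only the requested columns; fill absent keys with nulls
    # instead of raising, so optional keys are tolerated.
    if columns is None:
        return pa_table
    arrays = [
        pa_table[name] if name in pa_table.column_names else pa.nulls(len(pa_table))
        for name in columns
    ]
    return pa.table(dict(zip(columns, arrays)))
```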
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7652/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7651
7,651
fix: Extended metadata file names for folder_based_builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4", "events_url": "https://api.github.com/users/iPieter/events{/privacy}", "followers_url": "https://api.github.com/users/iPieter/followers", "following_url": "https://api.github.com/users/iPieter/following{/other_user}", "gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iPieter", "id": 6965756, "login": "iPieter", "node_id": "MDQ6VXNlcjY5NjU3NTY=", "organizations_url": "https://api.github.com/users/iPieter/orgs", "received_events_url": "https://api.github.com/users/iPieter/received_events", "repos_url": "https://api.github.com/users/iPieter/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iPieter/subscriptions", "type": "User", "url": "https://api.github.com/users/iPieter", "user_view_type": "public" }
[]
open
false
[]
2025-06-27T13:12:11Z
2025-06-30T08:19:37Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7651.diff", "html_url": "https://github.com/huggingface/datasets/pull/7651", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7651.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7651" }
Fixes #7650. The metadata files generated by the `DatasetDict.save_to_disk` function are not included in the folder_based_builder's metadata list, causing issues when only one actual data file is present, as described in issue #7650. This PR adds these filenames to the builder, allowing correct loading.
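A sketch of the list extension described above; the constant name and the exact set of filenames are assumptions based on the folder-based builder and the `save_to_disk()` on-disk format:

```python
# Hypothetical extension of the folder-based builder's metadata file list.
METADATA_FILENAMES = [
    "metadata.csv",
    "metadata.jsonl",
    # files written by save_to_disk() that must not be treated as data files:
    "dataset_info.json",
    "state.json",
    "dataset_dict.json",
]
```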
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7651/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7650
7,650
`load_dataset` defaults to json file format for datasets with 1 shard
{ "avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4", "events_url": "https://api.github.com/users/iPieter/events{/privacy}", "followers_url": "https://api.github.com/users/iPieter/followers", "following_url": "https://api.github.com/users/iPieter/following{/other_user}", "gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iPieter", "id": 6965756, "login": "iPieter", "node_id": "MDQ6VXNlcjY5NjU3NTY=", "organizations_url": "https://api.github.com/users/iPieter/orgs", "received_events_url": "https://api.github.com/users/iPieter/received_events", "repos_url": "https://api.github.com/users/iPieter/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iPieter/subscriptions", "type": "User", "url": "https://api.github.com/users/iPieter", "user_view_type": "public" }
[]
open
false
[]
2025-06-27T12:54:25Z
2025-06-27T12:54:25Z
null
NONE
null
null
### Describe the bug

I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation split is small enough to fit into a single shard, and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for streaming, and then loaded each dataset. I have no problem loading any of the other datasets with more than one arrow file/shard.

The error indicates the training set got loaded in arrow format (correct) and the validation set in json (incorrect). This seems to be because some of the metadata files are considered as dataset files.

```
Error loading /nfs/dataset_pt-uk: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('validation'): ('json', {})}
```

![Image](https://github.com/user-attachments/assets/f6e7596a-dd53-46a9-9a23-4e9cac2ac049)

Concretely, there is a mismatch between the metadata created by `DatasetDict.save_to_disk` and the builder for `datasets.load_dataset`:

https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/data_files.py#L107

The `folder_based_builder` lists all files, and with one arrow file the json files (which are actually metadata) are in the majority:

https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58

### Steps to reproduce the bug

Create a dataset with metadata and one arrow file in validation and multiple arrow files in the training set, following the above description. In my case, I saved the files via:

```python
dataset = DatasetDict({
    'train': train_dataset,
    'validation': val_dataset
})
dataset.save_to_disk(output_path, max_shard_size="50MB")
```

### Expected behavior

The dataset would get loaded.

### Environment info

- `datasets` version: 3.6.0
- Platform: Linux-6.14.0-22-generic-x86_64-with-glibc2.41
- Python version: 3.12.7
- `huggingface_hub` version: 0.31.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7650/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7649
7,649
Enable parallel shard upload in push_to_hub() using num_proc
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
[ "it was already added in https://github.com/huggingface/datasets/pull/7606 actually ^^'", "Oh sure sure, Closing this one as redundant." ]
2025-06-27T05:59:03Z
2025-07-07T18:13:53Z
2025-07-07T18:13:52Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7649.diff", "html_url": "https://github.com/huggingface/datasets/pull/7649", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7649.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7649" }
Fixes #7591

### Add num_proc support to `push_to_hub()` for parallel shard upload

This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`.

πŸ“Œ While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_push_parquet_shards_to_hub()`, it was not being used to parallelize the upload.

πŸ”§ This PR updates the internal `_push_parquet_shards_to_hub()` function to:

- Use `multiprocessing.Pool` and `iflatmap_unordered()` for concurrent shard upload when `num_proc > 1`
- Preserve the original serial upload behavior if `num_proc` is `None` or ≀ 1
- Keep tqdm progress and commit behavior unchanged

Let me know if any test coverage or further changes are needed!
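A sketch of the parallel dispatch described above, using `iflatmap_unordered` from `datasets.utils.py_utils`; the `_upload_shard` worker and its arguments are hypothetical stand-ins for the real shard-upload logic:

```python
from multiprocessing import Pool

from datasets.utils.py_utils import iflatmap_unordered

def _upload_shard(shard_index: int):
    # ...serialize the shard to parquet and upload it to the Hub...
    yield shard_index  # signal completion so the caller can tick the progress bar

def push_shards(num_shards: int, num_proc=None):
    if num_proc is not None and num_proc > 1:
        # Fan the shards out across worker processes; results arrive unordered.
        with Pool(num_proc) as pool:
            kwargs_iterable = [{"shard_index": i} for i in range(num_shards)]
            for _ in iflatmap_unordered(pool, _upload_shard, kwargs_iterable=kwargs_iterable):
                pass  # tqdm.update(1) in the real implementation
    else:
        # Preserve the original serial behavior.
        for i in range(num_shards):
            for _ in _upload_shard(i):
                pass
```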
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7649/reactions" }
null
null
null
true