Dataset schema (one row per issue):

| column | type | range / classes |
|---|---|---|
| html_url | string | length 46-51 |
| number | int64 | 1 - 7.85k |
| title | string | length 1-290 |
| user | dict | |
| labels | list | length 0-4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| comments | list | length 0-30 |
| created_at | timestamp[ns, tz=UTC] | 2020-04-14 10:18:02 - 2025-11-05 18:11:12 |
| updated_at | timestamp[ns, tz=UTC] | 2020-04-27 16:04:17 - 2025-11-06 09:44:34 |
| closed_at | timestamp[ns, tz=UTC] | 2020-04-14 12:01:40 - 2025-11-05 16:02:32 |
| author_association | string | 4 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | length 0 - 228k |
| closed_by | dict | |
| reactions | dict | |
| state_reason | string | 4 classes |
| sub_issues_summary | dict | |
| issue_dependencies_summary | dict | |
| is_pull_request | bool | 2 classes |
https://github.com/huggingface/datasets/issues/7336
7,336
Clarify documentation or Create DatasetCard
{ "avatar_url": "https://avatars.githubusercontent.com/u/145011209?v=4", "events_url": "https://api.github.com/users/August-murr/events{/privacy}", "followers_url": "https://api.github.com/users/August-murr/followers", "following_url": "https://api.github.com/users/August-murr/following{/other_user}", "gists_url": "https://api.github.com/users/August-murr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/August-murr", "id": 145011209, "login": "August-murr", "node_id": "U_kgDOCKSyCQ", "organizations_url": "https://api.github.com/users/August-murr/orgs", "received_events_url": "https://api.github.com/users/August-murr/received_events", "repos_url": "https://api.github.com/users/August-murr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/August-murr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/August-murr/subscriptions", "type": "User", "url": "https://api.github.com/users/August-murr", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[]
2024-12-17T12:01:00Z
2024-12-17T12:01:00Z
null
NONE
null
null
### Feature request

I noticed that you can use a Model Card instead of a Dataset Card when pushing a dataset to the Hub, but this isn’t clearly mentioned in [the docs.](https://huggingface.co/docs/datasets/dataset_card)

- Update the docs to clarify that a Model Card can work for datasets too.
- It might be worth creating a dedicated DatasetCard module, similar to the ModelCard module, for consistency and better support.

Not sure if this belongs here or on the [Hub repo](https://github.com/huggingface/huggingface_hub), but thought I’d bring it up!

### Motivation

I just spent an hour on [this issue](https://github.com/huggingface/trl/pull/2491) trying to create a `DatasetCard` for a script.

### Your contribution

might later
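Until a dedicated module exists, a dataset card is just a README with YAML front matter, so the request above can be approximated with plain string templating. A minimal sketch, stdlib only; the template layout and field names are illustrative, not the Hub's actual card API:

```python
from string import Template

# Hypothetical minimal dataset-card template: YAML front matter + prose,
# mirroring the general layout of Hub dataset cards.
CARD_TEMPLATE = Template("""\
---
license: $license
---
# $name

$description
""")

def render_dataset_card(name: str, license: str, description: str) -> str:
    """Render a README-style dataset card from the template above."""
    return CARD_TEMPLATE.substitute(
        name=name, license=license, description=description
    )
```

The rendered string could then be written to `README.md` in the dataset repo before pushing.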
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7336/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7335
7,335
Too many open files: '/root/.cache/huggingface/token'
{ "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kopyl", "id": 17604849, "login": "kopyl", "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "organizations_url": "https://api.github.com/users/kopyl/orgs", "received_events_url": "https://api.github.com/users/kopyl/received_events", "repos_url": "https://api.github.com/users/kopyl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "type": "User", "url": "https://api.github.com/users/kopyl", "user_view_type": "public" }
[]
open
false
[]
2024-12-16T21:30:24Z
2024-12-16T21:30:24Z
null
NONE
null
null
### Describe the bug

I ran this code:

```
from datasets import load_dataset
dataset = load_dataset("common-canvas/commoncatalog-cc-by", cache_dir="/datadrive/datasets/cc", num_proc=1000)
```

and got this error. Before it was some other file though (like something...incomplete). Running

```
ulimit -n 8192
```

did not help at all.

### Steps to reproduce the bug

Run the code I sent.

### Expected behavior

There should be no errors.

### Environment info

Linux, Jupyter Lab.
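One likely reason `ulimit -n 8192` "did not help" is that it only affects the shell it runs in; a Jupyter kernel keeps the limit it was started with. A hedged sketch of raising the soft descriptor limit from inside the running process instead (Unix only, `resource` is stdlib; the helper name is illustrative):

```python
import resource

def raise_nofile_limit(target: int = 8192) -> int:
    """Raise the soft RLIMIT_NOFILE towards `target`, capped at the hard
    limit, and return the resulting soft limit."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard != resource.RLIM_INFINITY:
        # An unprivileged process may only raise the soft limit up to hard.
        target = min(target, hard)
    if target > soft:
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]
```

Calling this in the notebook before `load_dataset(..., num_proc=1000)` raises the limit for the kernel itself, though with 1000 worker processes the real fix may also be lowering `num_proc`.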
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7335/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7334
7,334
TypeError: Value.__init__() missing 1 required positional argument: 'dtype'
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost", "user_view_type": "public" }
[]
open
false
[ "same error \n```\ndata = load_dataset('/opt/deepseek_R1_finetune/hf_datasets/openai/gsm8k', 'main')[split] \n```", "> same error\n> \n> ```\n> data = load_dataset('/opt/deepseek_R1_finetune/hf_datasets/openai/gsm8k', 'main')[split] \n> ```\n\nhttps://github.com/huggingface/open-r1/issues/204 this help me", "Solved by delete `dataset_infos.json` file in dataset dir, or you can transfer datasets from Hugginface to Modelscope by [hf-ms-transfer](https://github.com/wa008/hf-ms-transfer), which will solve this problem by default. " ]
2024-12-15T04:08:46Z
2025-10-30T09:05:53Z
null
NONE
null
null
### Describe the bug

```
ds = load_dataset(
    "./xxx.py",
    name="default",
    split="train",
)
```

The datasets library no longer supports debugging locally...

### Steps to reproduce the bug

```
from datasets import load_dataset

ds = load_dataset(
    "./repo.py",
    name="default",
    split="train",
)
for item in ds:
    print(item)
```

It works fine for "username/repo", but it does not work for "./repo.py" when debugging locally. Running the code template above reports: TypeError: Value.__init__() missing 1 required positional argument: 'dtype'

### Expected behavior

Fix this bug.

### Environment info

python 3.10
datasets==2.21
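The comments report that deleting a stale `dataset_infos.json` in the dataset directory resolves the error. That cleanup step can be sketched as a small stdlib helper (the function name is illustrative):

```python
from pathlib import Path

def drop_stale_dataset_infos(dataset_dir) -> bool:
    """Delete a stale dataset_infos.json from a local dataset directory.

    Returns True when a file was removed, False when none was present."""
    info = Path(dataset_dir) / "dataset_infos.json"
    if info.exists():
        info.unlink()
        return True
    return False
```

After removing the file, `load_dataset` regenerates the metadata instead of trying to deserialize the old, incompatible `Value` entries.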
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7334/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7328
7,328
Fix typo in arrow_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4", "events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}", "followers_url": "https://api.github.com/users/AndreaFrancis/followers", "following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}", "gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AndreaFrancis", "id": 5564745, "login": "AndreaFrancis", "node_id": "MDQ6VXNlcjU1NjQ3NDU=", "organizations_url": "https://api.github.com/users/AndreaFrancis/orgs", "received_events_url": "https://api.github.com/users/AndreaFrancis/received_events", "repos_url": "https://api.github.com/users/AndreaFrancis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions", "type": "User", "url": "https://api.github.com/users/AndreaFrancis", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7328). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-12-13T15:17:09Z
2024-12-19T17:10:27Z
2024-12-19T17:10:25Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7328.diff", "html_url": "https://github.com/huggingface/datasets/pull/7328", "merged_at": "2024-12-19T17:10:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/7328.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7328" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4", "events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}", "followers_url": "https://api.github.com/users/AndreaFrancis/followers", "following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}", "gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AndreaFrancis", "id": 5564745, "login": "AndreaFrancis", "node_id": "MDQ6VXNlcjU1NjQ3NDU=", "organizations_url": "https://api.github.com/users/AndreaFrancis/orgs", "received_events_url": "https://api.github.com/users/AndreaFrancis/received_events", "repos_url": "https://api.github.com/users/AndreaFrancis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions", "type": "User", "url": "https://api.github.com/users/AndreaFrancis", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7328/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7327
7,327
.map() is not caching and ram goes OOM
{ "avatar_url": "https://avatars.githubusercontent.com/u/7136076?v=4", "events_url": "https://api.github.com/users/simeneide/events{/privacy}", "followers_url": "https://api.github.com/users/simeneide/followers", "following_url": "https://api.github.com/users/simeneide/following{/other_user}", "gists_url": "https://api.github.com/users/simeneide/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/simeneide", "id": 7136076, "login": "simeneide", "node_id": "MDQ6VXNlcjcxMzYwNzY=", "organizations_url": "https://api.github.com/users/simeneide/orgs", "received_events_url": "https://api.github.com/users/simeneide/received_events", "repos_url": "https://api.github.com/users/simeneide/repos", "site_admin": false, "starred_url": "https://api.github.com/users/simeneide/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simeneide/subscriptions", "type": "User", "url": "https://api.github.com/users/simeneide", "user_view_type": "public" }
[]
open
false
[ "I have the same issue - any update on this?" ]
2024-12-13T14:22:56Z
2025-02-10T10:42:38Z
null
NONE
null
null
### Describe the bug

I'm trying to run a fairly simple map that converts a dataset into numpy arrays. However, it just piles up in memory and doesn't write to disk. I've tried multiple cache techniques such as specifying the cache dir, setting max mem, etc., but none seem to work. What am I missing here?

### Steps to reproduce the bug

```
from pydub import AudioSegment
import io
import base64
import numpy as np
import os

CACHE_PATH = "/mnt/extdisk/cache"  # "/root/.cache/huggingface/"
os.environ["HF_HOME"] = CACHE_PATH

import datasets
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Create a handler for Jupyter notebook
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

# datasets.config.IN_MEMORY_MAX_SIZE = 1000  # *(2**30)  # 50 gb
print(datasets.config.HF_CACHE_HOME)
print(datasets.config.HF_DATASETS_CACHE)

# Decode the base64 string into bytes
def convert_mp3_to_audio_segment(example):
    """
    example = ds['train'][0]
    """
    try:
        audio_data_bytes = base64.b64decode(example['audio'])
        # Use pydub to load the MP3 audio from the decoded bytes
        audio_segment = AudioSegment.from_file(io.BytesIO(audio_data_bytes), format="mp3")
        # Resample to 24_000
        audio_segment = audio_segment.set_frame_rate(24_000)
        audio = {
            'sampling_rate': audio_segment.frame_rate,
            'array': np.array(audio_segment.get_array_of_samples(), dtype="float"),
        }
        del audio_segment
        duration = len(audio['array']) / audio['sampling_rate']
    except Exception as e:
        logger.warning(f"Failed to convert audio for {example['id']}. Error: {e}")
        audio = {'sampling_rate': 0, 'array': np.array([])}
        duration = 0
    return {'audio': audio, 'duration': duration}

ds = datasets.load_dataset("NbAiLab/nb_distil_speech_noconcat_stortinget", cache_dir=CACHE_PATH, keep_in_memory=False)

# %%
num_proc = 32
ds_processed = (
    ds
    # .select(range(10))
    .map(
        convert_mp3_to_audio_segment,
        num_proc=num_proc,
        desc="Converting mp3 to audio segment",
        # cache_file_name=f"{CACHE_PATH}/stortinget_audio"
        # cache_file_name="test"
    )
)
```

### Expected behavior

The map should write to disk.

### Environment info

- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.39
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.3
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
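As a sanity check outside `datasets`, the on-disk spilling that `.map()` is expected to do can be mimicked with a chunked writer: process records lazily and flush each converted chunk to disk instead of accumulating everything in RAM. A stdlib-only sketch (the `convert` stand-in and chunk format are illustrative, not the issue's real pipeline):

```python
import json
from pathlib import Path

def convert(example):
    # Stand-in for the expensive mp3 -> array conversion in the issue.
    return {"id": example["id"], "duration": len(example["audio"]) / 24_000}

def process_in_chunks(records, out_dir, chunk_size=1000) -> int:
    """Convert records lazily, writing each chunk to its own file.

    Returns the number of chunk files written; peak memory stays
    bounded by `chunk_size` converted records."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    chunk, n_files = [], 0
    for rec in records:
        chunk.append(convert(rec))
        if len(chunk) == chunk_size:
            (out_dir / f"chunk-{n_files:05d}.json").write_text(json.dumps(chunk))
            chunk, n_files = [], n_files + 1
    if chunk:  # flush the final partial chunk
        (out_dir / f"chunk-{n_files:05d}.json").write_text(json.dumps(chunk))
        n_files += 1
    return n_files
```

If this pattern stays within memory while `.map()` does not, the issue is likely in how `.map()`'s cache files are being configured rather than in the conversion itself.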
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7327/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7326
7,326
Remove upper bound for fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/26092524?v=4", "events_url": "https://api.github.com/users/fellhorn/events{/privacy}", "followers_url": "https://api.github.com/users/fellhorn/followers", "following_url": "https://api.github.com/users/fellhorn/following{/other_user}", "gists_url": "https://api.github.com/users/fellhorn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fellhorn", "id": 26092524, "login": "fellhorn", "node_id": "MDQ6VXNlcjI2MDkyNTI0", "organizations_url": "https://api.github.com/users/fellhorn/orgs", "received_events_url": "https://api.github.com/users/fellhorn/received_events", "repos_url": "https://api.github.com/users/fellhorn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fellhorn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fellhorn/subscriptions", "type": "User", "url": "https://api.github.com/users/fellhorn", "user_view_type": "public" }
[]
open
false
[ "Unfortunately `fsspec` versioning allows breaking changes across version and there is no way we can keep it without constrains at the moment. It already broke `datasets` once in the past. Maybe one day once `fsspec` decides on a stable and future proof API but I don't think this will happen anytime soon\r\n\r\nedit: bumped to 2024.10.0 in https://github.com/huggingface/datasets/pull/7352" ]
2024-12-13T11:35:12Z
2025-01-03T15:34:37Z
null
NONE
null
null
### Describe the bug

As also raised by @cyyever in https://github.com/huggingface/datasets/pull/7296 and @NeilGirdhar in https://github.com/huggingface/datasets/commit/d5468836fe94e8be1ae093397dd43d4a2503b926#commitcomment-140952162, `datasets` has a problematic version constraint on `fsspec`.

In our case this causes (unnecessary?) trouble due to a race-condition bug in that version of the corresponding `gcsfs` plugin, which causes deadlocks: https://github.com/fsspec/gcsfs/pull/643

We just use a version override to ignore the constraint from `datasets`, but imho the version constraint could just be removed in the first place. The last few PRs bumping the upper bound were basically uneventful:

* https://github.com/huggingface/datasets/pull/7219
* https://github.com/huggingface/datasets/pull/6921
* https://github.com/huggingface/datasets/pull/6747

### Steps to reproduce the bug

-

### Expected behavior

Installing `fsspec>=2024.10.0` alongside `datasets` should be possible without overriding constraints.

### Environment info

All recent datasets versions
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7326/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7325
7,325
Introduce pdf support (#7318)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4812761?v=4", "events_url": "https://api.github.com/users/yabramuvdi/events{/privacy}", "followers_url": "https://api.github.com/users/yabramuvdi/followers", "following_url": "https://api.github.com/users/yabramuvdi/following{/other_user}", "gists_url": "https://api.github.com/users/yabramuvdi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yabramuvdi", "id": 4812761, "login": "yabramuvdi", "node_id": "MDQ6VXNlcjQ4MTI3NjE=", "organizations_url": "https://api.github.com/users/yabramuvdi/orgs", "received_events_url": "https://api.github.com/users/yabramuvdi/received_events", "repos_url": "https://api.github.com/users/yabramuvdi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yabramuvdi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yabramuvdi/subscriptions", "type": "User", "url": "https://api.github.com/users/yabramuvdi", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7325). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi @AndreaFrancis and @lhoestq ! Thanks for looking at the code and for all the changes and suggestions. I have worked on all your suggestions. I need to work a bit more on the tests, but I created a first version and uploaded a simple pdf to use for testing. The pdf has a couple of pages some of which have images, figures, and tables (this is exactly the pdfs for which I think keeping the pdf format is very interesting because they have multiple types of content). Will try to finish the tests as soon as possible.", "(I updated the CI a bit, I took the liberty to update your branch and re-run it on your PR @yabramuvdi)" ]
2024-12-12T18:31:18Z
2025-03-18T14:00:36Z
2025-03-18T14:00:36Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7325.diff", "html_url": "https://github.com/huggingface/datasets/pull/7325", "merged_at": "2025-03-18T14:00:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/7325.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7325" }
First implementation of the Pdf feature to support pdfs (#7318) . Using [pdfplumber](https://github.com/jsvine/pdfplumber?tab=readme-ov-file#python-library) as the default library to work with pdfs. @lhoestq and @AndreaFrancis
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7325/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7323
7,323
Unexpected cache behaviour using load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/74349080?v=4", "events_url": "https://api.github.com/users/Moritz-Wirth/events{/privacy}", "followers_url": "https://api.github.com/users/Moritz-Wirth/followers", "following_url": "https://api.github.com/users/Moritz-Wirth/following{/other_user}", "gists_url": "https://api.github.com/users/Moritz-Wirth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Moritz-Wirth", "id": 74349080, "login": "Moritz-Wirth", "node_id": "MDQ6VXNlcjc0MzQ5MDgw", "organizations_url": "https://api.github.com/users/Moritz-Wirth/orgs", "received_events_url": "https://api.github.com/users/Moritz-Wirth/received_events", "repos_url": "https://api.github.com/users/Moritz-Wirth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Moritz-Wirth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moritz-Wirth/subscriptions", "type": "User", "url": "https://api.github.com/users/Moritz-Wirth", "user_view_type": "public" }
[]
closed
false
[ "Hi ! Since `datasets` 3.x, the `datasets` specific files are in `cache_dir=` and the HF files are cached using `huggingface_hub` and you can set its cache directory using the `HF_HOME` environment variable.\r\n\r\nThey are independent, for example you can delete the Hub cache (containing downloaded files) but still reload your cached datasets from the `datasets` cache (containing prepared datasets in Arrow format)" ]
2024-12-12T14:03:00Z
2025-01-31T11:34:24Z
2025-01-31T11:34:24Z
NONE
null
null
### Describe the bug

Following the [Cache management](https://huggingface.co/docs/datasets/en/cache) docs and previous behaviour from datasets version 2.18.0, one is able to change the cache directory. Previously, all downloaded/extracted/etc. files were found in this folder. As I have recently updated to the latest version, this is not the case anymore. Downloaded files are stored in `~/.cache/huggingface/hub`. Providing the `cache_dir` argument in `load_dataset`, the cache directory is created and there are some files, but the bulk is still in `~/.cache/huggingface/hub`.

I believe this could be solved by adding the cache_dir argument [here](https://github.com/huggingface/datasets/blob/fdda5585ab18ea1292547f36c969d12c408ab842/src/datasets/utils/file_utils.py#L188).

### Steps to reproduce the bug

For example, using https://huggingface.co/datasets/ashraq/esc50:

```python
from datasets import load_dataset
ds = load_dataset("ashraq/esc50", "default", cache_dir="~/custom/cache/path/esc50")
```

### Expected behavior

I would expect the bulk of files related to the dataset to be stored somewhere in `~/custom/cache/path/esc50`, but it seems they are in `~/.cache/huggingface/hub/datasets--ashraq--esc50`.

### Environment info

- `datasets` version: 3.2.0
- Platform: Linux-5.14.0-503.15.1.el9_5.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.14
- `huggingface_hub` version: 0.26.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
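Per the maintainer reply in the comments, since `datasets` 3.x the Hub download cache is controlled by `HF_HOME` rather than `cache_dir`. A sketch of that approach (the path is hypothetical); the key detail is that the variable must be set before `datasets` or `huggingface_hub` is first imported, since the cache location is read at import time:

```python
import os

# Set the cache root BEFORE importing datasets / huggingface_hub:
# downloaded Hub files then land under this directory instead of
# ~/.cache/huggingface/hub. The path is illustrative.
os.environ["HF_HOME"] = "/datadrive/hf_cache"

# Only import after the environment variable is in place:
# from datasets import load_dataset
# ds = load_dataset("ashraq/esc50", "default")
```

`cache_dir=` still controls where the prepared Arrow files go; the two caches are independent.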
{ "avatar_url": "https://avatars.githubusercontent.com/u/74349080?v=4", "events_url": "https://api.github.com/users/Moritz-Wirth/events{/privacy}", "followers_url": "https://api.github.com/users/Moritz-Wirth/followers", "following_url": "https://api.github.com/users/Moritz-Wirth/following{/other_user}", "gists_url": "https://api.github.com/users/Moritz-Wirth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Moritz-Wirth", "id": 74349080, "login": "Moritz-Wirth", "node_id": "MDQ6VXNlcjc0MzQ5MDgw", "organizations_url": "https://api.github.com/users/Moritz-Wirth/orgs", "received_events_url": "https://api.github.com/users/Moritz-Wirth/received_events", "repos_url": "https://api.github.com/users/Moritz-Wirth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Moritz-Wirth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moritz-Wirth/subscriptions", "type": "User", "url": "https://api.github.com/users/Moritz-Wirth", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7323/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7322
7,322
ArrowInvalid: JSON parse error: Column() changed from object to array in row 0
{ "avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4", "events_url": "https://api.github.com/users/Polarisamoon/events{/privacy}", "followers_url": "https://api.github.com/users/Polarisamoon/followers", "following_url": "https://api.github.com/users/Polarisamoon/following{/other_user}", "gists_url": "https://api.github.com/users/Polarisamoon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Polarisamoon", "id": 41767521, "login": "Polarisamoon", "node_id": "MDQ6VXNlcjQxNzY3NTIx", "organizations_url": "https://api.github.com/users/Polarisamoon/orgs", "received_events_url": "https://api.github.com/users/Polarisamoon/received_events", "repos_url": "https://api.github.com/users/Polarisamoon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Polarisamoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Polarisamoon/subscriptions", "type": "User", "url": "https://api.github.com/users/Polarisamoon", "user_view_type": "public" }
[]
open
false
[ "Hi ! `datasets` uses Arrow under the hood which expects each column and array to have fixed types that don't change across rows of a dataset, which is why we get this error. This dataset in particular doesn't have a format compatible with Arrow unfortunately. Don't hesitate to open a discussion or PR on HF to fix the dataset", "@lhoestq Is it correct to assume that most multimodal datasets with variable number of images across conversations are not compatible with Arrow ? \n\nI’m running into a problem while trying to format multimodal datasets (image + text) using the Hugging Face datasets library. Specifically, I’m working with a structure where conversations include both images and text messages. When I convert my dataset from a Python dict to a Hugging Face Dataset, I’m seeing unexpected None values being inserted for some fields that aren’t relevant for a given message (e.g., \"text\": None in image messages). Here’s what seems to be happening:\n\n🔍 What’s going wrong\n\nHugging Face datasets (backed by Apache Arrow) tries to flatten the schema across all samples. That means it enforces a fixed set of fields across the dataset – even if some of them are None for a given entry. Since my dataset contains heterogeneous conversation messages (some with images, others with text), Arrow is injecting None for whichever attributes don’t exist in each message type to preserve a unified schema.\n\nThis results in a lot of meaningless or misleading Nones across the dataset, and breaks logic further down the pipeline – in particular, it causes the qwen_vl_utils functions (like process_vision_info) to crash or misbehave.\n\n\n🤔 Workarounds considered\n\nI could stick with raw JSON, which preserves the heterogeneous structure properly. But that means giving up all the nice features of datasets and Arrow (e.g. streaming, map/filter, etc.), which feels like a shame.\n\n❓My question\n\nDo you know of a clean way to define a flexible schema with Hugging Face Datasets – maybe using nested structures or dynamic fields – so that each conversation message doesn’t get forced into a flat structure with irrelevant keys? Or should I just stick with JSON for this kind of multimodal case?\n\nLet me know if you’ve run into this before or have any tips!\n", "Datasets without fixed types are harder to use in many data frameworks unfortunately. You will have to handle the case with None values if you want to use Arrow/`datasets` (or any other framework using Arrow like spark, ray, dask). **IMO the short term solution is to fix qwen_vl_utils / process_vision_info.**\n\nAlternatively we can explore adding the Arrow `Json` type to `datasets`, but doesn't allow having image types in the Json object so I don't think this is the right solution.", "I totally share the analysis, I am happy to try to help with this! I am currently using Unsloth so it is the `UnslothVisionDataCollator` which has almost the same logic as `qwen_vl_utils.process_vision_info`. Is there a Huggingface class I could patch too ? " ]
2024-12-11T08:41:39Z
2025-07-15T13:06:55Z
null
NONE
null
null
### Describe the bug

Encountering an error while loading the `liuhaotian/LLaVA-Instruct-150K` dataset.

### Steps to reproduce the bug

```
from datasets import load_dataset
fw = load_dataset("liuhaotian/LLaVA-Instruct-150K")
```

Error:

```
ArrowInvalid                              Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
    136 try:
--> 137     pa_table = paj.read_json(
    138         io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)

20 frames

ArrowInvalid: JSON parse error: Column() changed from object to array in row 0

During handling of the above exception, another exception occurred:

ArrowTypeError                            Traceback (most recent call last)
ArrowTypeError: ("Expected bytes, got a 'int' object", 'Conversion failed for column id with type object')

The above exception was the direct cause of the following exception:

DatasetGenerationError                    Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
   1895 if isinstance(e, DatasetGenerationError):
   1896     raise
-> 1897 raise DatasetGenerationError("An error occurred while generating the dataset") from e
   1898
   1899 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)

DatasetGenerationError: An error occurred while generating the dataset
```

### Expected behavior

I have tried loading the dataset both on my own server and on Colab, and encountered errors in both instances.

### Environment info

```
- `datasets` version: 3.2.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.3
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.9.0
```
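The second traceback ("Conversion failed for column id") points at mixed types in the `id` column, which Arrow rejects because every row of a column must share one type. If the raw JSON files are available locally, a hedged pre-pass like this hypothetical helper can coerce the column to a single type before handing the records to `datasets`:

```python
import json

def normalize_id_column(records):
    """Coerce the 'id' field to str so every row carries one consistent
    type; Arrow then sees a plain string column."""
    for rec in records:
        if "id" in rec:
            rec["id"] = str(rec["id"])
    return records

def load_normalized(path):
    """Load a local JSON list and normalize it (illustrative helper)."""
    with open(path) as f:
        return normalize_id_column(json.load(f))
```

The cleaned list could then be passed to something like `Dataset.from_list(...)` instead of loading the raw files directly; this does not fix the upstream dataset, only the local copy.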
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7322/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7321
7,321
ImportError: cannot import name 'set_caching_enabled' from 'datasets'
{ "avatar_url": "https://avatars.githubusercontent.com/u/33318353?v=4", "events_url": "https://api.github.com/users/sankexin/events{/privacy}", "followers_url": "https://api.github.com/users/sankexin/followers", "following_url": "https://api.github.com/users/sankexin/following{/other_user}", "gists_url": "https://api.github.com/users/sankexin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sankexin", "id": 33318353, "login": "sankexin", "node_id": "MDQ6VXNlcjMzMzE4MzUz", "organizations_url": "https://api.github.com/users/sankexin/orgs", "received_events_url": "https://api.github.com/users/sankexin/received_events", "repos_url": "https://api.github.com/users/sankexin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sankexin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sankexin/subscriptions", "type": "User", "url": "https://api.github.com/users/sankexin", "user_view_type": "public" }
[]
open
false
[ "pip install datasets==2.18.0", "Hi ! I think you need to update axolotl" ]
2024-12-11T01:58:46Z
2024-12-11T13:32:15Z
null
NONE
null
null
### Describe the bug Traceback (most recent call last): File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details __import__(pkg_name) File "/home/Medusa/axolotl/src/axolotl/cli/__init__.py", line 23, in <module> from axolotl.train import TrainDatasetMeta File "/home/Medusa/axolotl/src/axolotl/train.py", line 23, in <module> from axolotl.utils.trainer import setup_trainer File "/home/Medusa/axolotl/src/axolotl/utils/trainer.py", line 13, in <module> from datasets import set_caching_enabled ImportError: cannot import name 'set_caching_enabled' from 'datasets' (/usr/local/lib/python3.10/site-packages/datasets/__init__.py) ### Steps to reproduce the bug 1、axolotl 2、accelerate launch -m axolotl.cli.train examples/medusa/qwen_lora_stage1.yml ### Expected behavior enable datasets ### Environment info python3.10
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7321/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7320
7,320
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
{ "avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4", "events_url": "https://api.github.com/users/atrompeterog/events{/privacy}", "followers_url": "https://api.github.com/users/atrompeterog/followers", "following_url": "https://api.github.com/users/atrompeterog/following{/other_user}", "gists_url": "https://api.github.com/users/atrompeterog/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/atrompeterog", "id": 38381084, "login": "atrompeterog", "node_id": "MDQ6VXNlcjM4MzgxMDg0", "organizations_url": "https://api.github.com/users/atrompeterog/orgs", "received_events_url": "https://api.github.com/users/atrompeterog/received_events", "repos_url": "https://api.github.com/users/atrompeterog/repos", "site_admin": false, "starred_url": "https://api.github.com/users/atrompeterog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atrompeterog/subscriptions", "type": "User", "url": "https://api.github.com/users/atrompeterog", "user_view_type": "public" }
[]
closed
false
[ "Now i have other error" ]
2024-12-10T20:23:11Z
2024-12-10T23:22:23Z
2024-12-10T23:22:23Z
NONE
null
null
### Describe the bug I am trying to create a PEFT model from DISTILBERT model, and run a training loop. However, the trainer.train() is giving me this error: ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label'] Here is my code: ### Steps to reproduce the bug #Creating a PEFT Config from peft import LoraConfig from transformers import AutoTokenizer, AutoModelForSequenceClassification from peft import get_peft_model lora_config = LoraConfig( task_type="SEQ_CLASS", r=8, lora_alpha=32, target_modules=["q_lin", "k_lin", "v_lin"], lora_dropout=0.01, ) #Converting a Transformers Model into a PEFT Model model = AutoModelForSequenceClassification.from_pretrained( "distilbert-base-uncased", num_labels=2, #Binary classification, 1 = positive, 0 = negative ) lora_model = get_peft_model(model, lora_config) print(lora_model) Tokenize data set from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") # Load the train and test splits dataset dataset = load_dataset("fancyzhx/amazon_polarity") #create a smaller subset for train and test subset_size = 5000 small_train_dataset = dataset["train"].shuffle(seed=42).select(range(subset_size)) small_test_dataset = dataset["test"].shuffle(seed=42).select(range(subset_size)) #Tokenize data def tokenize_function(example): return tokenizer(example["content"], padding="max_length", truncation=True) tokenized_train_dataset = small_train_dataset.map(tokenize_function, batched=True) tokenized_test_dataset = small_test_dataset.map(tokenize_function, batched=True) train_lora = tokenized_train_dataset.rename_column('label', 'labels') test_lora = tokenized_test_dataset.rename_column('label', 'labels') print(tokenized_train_dataset.column_names) print(tokenized_test_dataset.column_names) #Train the PEFT model import numpy as np from transformers import Trainer, TrainingArguments, default_data_collator, DataCollatorWithPadding from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForSequenceClassification def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return {"accuracy": (predictions == labels).mean()} trainer = Trainer( model=lora_model, args=TrainingArguments( output_dir=".", learning_rate=2e-3, # Reduce the batch size if you don't have enough memory per_device_train_batch_size=1, per_device_eval_batch_size=1, num_train_epochs=3, weight_decay=0.01, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, ), train_dataset=tokenized_train_dataset, eval_dataset=tokenized_test_dataset, tokenizer=tokenizer, data_collator=DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt"), compute_metrics=compute_metrics, ) trainer.train() ### Expected behavior Example of output: [558/558 01:04, Epoch XX] Epoch | Training Loss | Validation Loss | Accuracy -- | -- | -- | -- 1 | No log | 0.046478 | 0.988341 2 | 0.052800 | 0.048840 | 0.988341 ### Environment info Using python and jupyter notbook
{ "avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4", "events_url": "https://api.github.com/users/atrompeterog/events{/privacy}", "followers_url": "https://api.github.com/users/atrompeterog/followers", "following_url": "https://api.github.com/users/atrompeterog/following{/other_user}", "gists_url": "https://api.github.com/users/atrompeterog/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/atrompeterog", "id": 38381084, "login": "atrompeterog", "node_id": "MDQ6VXNlcjM4MzgxMDg0", "organizations_url": "https://api.github.com/users/atrompeterog/orgs", "received_events_url": "https://api.github.com/users/atrompeterog/received_events", "repos_url": "https://api.github.com/users/atrompeterog/repos", "site_admin": false, "starred_url": "https://api.github.com/users/atrompeterog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atrompeterog/subscriptions", "type": "User", "url": "https://api.github.com/users/atrompeterog", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7320/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7319
7,319
set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7319). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-12-10T17:01:34Z
2024-12-10T17:04:04Z
2024-12-10T17:01:45Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7319.diff", "html_url": "https://github.com/huggingface/datasets/pull/7319", "merged_at": "2024-12-10T17:01:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/7319.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7319" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7319/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7318
7,318
Introduce support for PDFs
{ "avatar_url": "https://avatars.githubusercontent.com/u/4812761?v=4", "events_url": "https://api.github.com/users/yabramuvdi/events{/privacy}", "followers_url": "https://api.github.com/users/yabramuvdi/followers", "following_url": "https://api.github.com/users/yabramuvdi/following{/other_user}", "gists_url": "https://api.github.com/users/yabramuvdi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yabramuvdi", "id": 4812761, "login": "yabramuvdi", "node_id": "MDQ6VXNlcjQ4MTI3NjE=", "organizations_url": "https://api.github.com/users/yabramuvdi/orgs", "received_events_url": "https://api.github.com/users/yabramuvdi/received_events", "repos_url": "https://api.github.com/users/yabramuvdi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yabramuvdi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yabramuvdi/subscriptions", "type": "User", "url": "https://api.github.com/users/yabramuvdi", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[ "#self-assign", "Awesome ! Let me know if you have any question or if I can help :)\r\n\r\ncc @AndreaFrancis as well for viz", "Other candidates libraries for the Pdf type: PyMuPDF pypdf and pdfplumber\r\n\r\nEDIT: Pymupdf looks like a good choice when it comes to maturity + performance + versatility BUT the license is maybe an issue, and pypdf, pypdfium2 or pdfplumber are good options imo", "Related to https://github.com/huggingface/datasets/issues/7058", "PyMuPDF is AGPL licensed, so we can't use it. I will move forward with [pdfplumber](https://github.com/jsvine/pdfplumber?tab=readme-ov-file#python-library).", "Hi both! I have made a pull request with a first basic implementation of the Pdf feature. I followed closely what I saw on the Video and Image features. It is my first time contributing so any comments are very welcomed. I think it would be useful to outline together what additional things we can implement (e.g. enabling parsing of the pdf). Thanks :) " ]
2024-12-10T16:59:48Z
2024-12-12T18:38:13Z
null
CONTRIBUTOR
null
null
### Feature request The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"path": ..., "bytes": ...} as a VideoReader using decord. We want to do the same with pdf and get a [pypdfium2.PdfDocument](https://pypdfium2.readthedocs.io/en/stable/_modules/pypdfium2/_helpers/document.html#PdfDocument). ### Motivation In many cases PDFs contain very valuable information beyond text (e.g. images, figures). Support for PDFs would help create datasets where all the information is preserved. ### Your contribution I can start the implementation of the Pdf type :)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7318/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7317
7,317
Release: 3.2.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7317). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-12-10T16:53:20Z
2024-12-10T16:56:58Z
2024-12-10T16:56:56Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7317.diff", "html_url": "https://github.com/huggingface/datasets/pull/7317", "merged_at": "2024-12-10T16:56:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/7317.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7317" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7317/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7316
7,316
More docs to from_dict to mention that the result lives in RAM
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7316). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-12-10T13:56:01Z
2024-12-10T13:58:32Z
2024-12-10T13:57:02Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7316.diff", "html_url": "https://github.com/huggingface/datasets/pull/7316", "merged_at": "2024-12-10T13:57:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/7316.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7316" }
following discussions at https://discuss.huggingface.co/t/how-to-load-this-simple-audio-data-set-and-use-dataset-map-without-memory-issues/17722/14
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7316/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7314
7,314
Resolved for empty datafiles
{ "avatar_url": "https://avatars.githubusercontent.com/u/20582290?v=4", "events_url": "https://api.github.com/users/sahillihas/events{/privacy}", "followers_url": "https://api.github.com/users/sahillihas/followers", "following_url": "https://api.github.com/users/sahillihas/following{/other_user}", "gists_url": "https://api.github.com/users/sahillihas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sahillihas", "id": 20582290, "login": "sahillihas", "node_id": "MDQ6VXNlcjIwNTgyMjkw", "organizations_url": "https://api.github.com/users/sahillihas/orgs", "received_events_url": "https://api.github.com/users/sahillihas/received_events", "repos_url": "https://api.github.com/users/sahillihas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sahillihas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sahillihas/subscriptions", "type": "User", "url": "https://api.github.com/users/sahillihas", "user_view_type": "public" }
[]
open
false
[ "Closes #6152 ", "@mariosasko I hope this resolves #6152 " ]
2024-12-09T15:47:22Z
2024-12-27T18:20:21Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7314.diff", "html_url": "https://github.com/huggingface/datasets/pull/7314", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7314.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7314" }
Resolved for Issue#6152
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7314/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7313
7,313
Cannot create a dataset with relative audio path
{ "avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4", "events_url": "https://api.github.com/users/sedol1339/events{/privacy}", "followers_url": "https://api.github.com/users/sedol1339/followers", "following_url": "https://api.github.com/users/sedol1339/following{/other_user}", "gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sedol1339", "id": 5188731, "login": "sedol1339", "node_id": "MDQ6VXNlcjUxODg3MzE=", "organizations_url": "https://api.github.com/users/sedol1339/orgs", "received_events_url": "https://api.github.com/users/sedol1339/received_events", "repos_url": "https://api.github.com/users/sedol1339/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions", "type": "User", "url": "https://api.github.com/users/sedol1339", "user_view_type": "public" }
[]
open
false
[ "Hello ! when you `cast_column` you need the paths to be absolute paths or relative paths to your working directory, not the original dataset directory.\r\n\r\nThough I'd recommend structuring your dataset as an AudioFolder which automatically links a metadata.jsonl or csv to the audio files via relative paths **within** the dataset repository: https://huggingface.co/docs/datasets/v3.2.0/en/audio_load#audiofolder", "@lhoestq thank you, but there are two problems with using AudioFolder:\r\n1. It is said that AudioFolder requires metadata.csv. However, my datset is too large and contains nested and np.ndarray fields, so I can't use csv.\r\n2. It is said that I need to load the dataset with `load_dataset(\"audiofolder\", ...)`. However, if possible, I want my dataset to be loaded as usual with `load_dataset(dataset_name)` after I upload if to HF.", "You can use metadata.jsonl if you have nested data :)\r\n\r\nAnd actually if you have a dataset structured as an AudioFolder then `load_dataset(dataset_name)` does work after uploading to HF", "I have created an audio dataset. In my repo, I have explained the steps and structure. An example dataset is also available in the repo. https://github.com/pr0mila/ParquetToHuggingFace " ]
2024-12-09T07:34:20Z
2025-04-19T07:13:08Z
null
NONE
null
null
### Describe the bug Hello! I want to create a dataset of parquet files, with audios stored as separate .mp3 files. However, it says "No such file or directory" (see the reproducing code). ### Steps to reproduce the bug Creating a dataset ``` from pathlib import Path from datasets import Dataset, load_dataset, Audio Path('my_dataset/audio').mkdir(parents=True, exist_ok=True) Path('my_dataset/audio/file.mp3').touch(exist_ok=True) Dataset.from_list( [{'audio': {'path': 'audio/file.mp3'}}] ).to_parquet('my_dataset/data.parquet') ``` Result: ``` # my_dataset # ├── audio # │ └── file.mp3 # └── data.parquet ``` Trying to load the dataset ``` dataset = ( load_dataset('my_dataset', split='train') .cast_column('audio', Audio(sampling_rate=16_000)) ) dataset[0] >>> FileNotFoundError: [Errno 2] No such file or directory: 'audio/file.mp3' ``` ### Expected behavior I expect the dataset to load correctly. I've found 2 workarounds, but they are not very good: 1. I can specify an absolute path to the audio, however, when I move the folder or upload to HF it will stop working. 2. I can set `'path': 'file.mp3'`, and load with `load_dataset('my_dataset', data_dir='audio')` - it seems to work, but does this mean that anyone from Hugging Face who wants to use this dataset should also pass the `data_dir` argument, otherwise it won't work? ### Environment info datasets 3.1.0, Ubuntu 24.04.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7313/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7312
7,312
[Audio Features - DO NOT MERGE] PoC for adding an offset+sliced reading to audio file.
{ "avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4", "events_url": "https://api.github.com/users/TParcollet/events{/privacy}", "followers_url": "https://api.github.com/users/TParcollet/followers", "following_url": "https://api.github.com/users/TParcollet/following{/other_user}", "gists_url": "https://api.github.com/users/TParcollet/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TParcollet", "id": 11910731, "login": "TParcollet", "node_id": "MDQ6VXNlcjExOTEwNzMx", "organizations_url": "https://api.github.com/users/TParcollet/orgs", "received_events_url": "https://api.github.com/users/TParcollet/received_events", "repos_url": "https://api.github.com/users/TParcollet/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TParcollet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TParcollet/subscriptions", "type": "User", "url": "https://api.github.com/users/TParcollet", "user_view_type": "public" }
[]
open
false
[]
2024-12-08T10:27:31Z
2024-12-08T10:27:31Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7312.diff", "html_url": "https://github.com/huggingface/datasets/pull/7312", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7312.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7312" }
This is a proof of concept for #7310 . The idea is to enable the access to others column of the dataset row when loading an audio file into a table. This is to allow sliced reading. As stated in the issue, many people have very long audio files and use start and stop slicing in this audio file. Right now, this code work as a PoC on my dataset. However, this is **just to illustrate** the idea. Many things are messed up, the first being that the shards have wildly varying sizes. Could be of interest to @lhoestq and @sanchit-gandhi ? Happy to test better ideas locally.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7312/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7311
7,311
How to get the original dataset name with username?
{ "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/npuichigo", "id": 11533479, "login": "npuichigo", "node_id": "MDQ6VXNlcjExNTMzNDc5", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "repos_url": "https://api.github.com/users/npuichigo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "type": "User", "url": "https://api.github.com/users/npuichigo", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[ "Hi ! why not pass the dataset id to Ray and let it check the parquet files ? Or pass the parquet files lists directly ?", "I'm not sure why ray design an API like this to accept a `Dataset` object, so they need to verify the `Dataset` is the original one and use the `DatasetInfo` to query the huggingface hub. I'll advise the ray data team to use dataset id instead of dataset for the `HuggingFaceDatasource` API." ]
2024-12-08T07:18:14Z
2025-01-09T10:48:02Z
null
CONTRIBUTOR
null
null
### Feature request The issue is related to ray data https://github.com/ray-project/ray/issues/49008 which it requires to check if the dataset is the original one just after `load_dataset` and parquet files are already available on hf hub. The solution used now is to get the dataset name, config and split, then `load_dataset` again and check the fingerprint. But it's unable to get the correct dataset name if it contains username. So how to get the dataset name with username prefix, or is there another way to query if a dataset is the original one with parquet available? @lhoestq ### Motivation https://github.com/ray-project/ray/issues/49008 ### Your contribution Would like to fix that.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7311/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7310
7,310
Enable the Audio Feature to decode / read with an offset + duration
{ "avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4", "events_url": "https://api.github.com/users/TParcollet/events{/privacy}", "followers_url": "https://api.github.com/users/TParcollet/followers", "following_url": "https://api.github.com/users/TParcollet/following{/other_user}", "gists_url": "https://api.github.com/users/TParcollet/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TParcollet", "id": 11910731, "login": "TParcollet", "node_id": "MDQ6VXNlcjExOTEwNzMx", "organizations_url": "https://api.github.com/users/TParcollet/orgs", "received_events_url": "https://api.github.com/users/TParcollet/received_events", "repos_url": "https://api.github.com/users/TParcollet/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TParcollet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TParcollet/subscriptions", "type": "User", "url": "https://api.github.com/users/TParcollet", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[ "Hi ! What about having audio + start + duration columns and enable something like this ?\r\n\r\n```python\r\nfor example in ds:\r\n array = example[\"audio\"].read(start=example[\"start\"], frames=example[\"duration\"])\r\n```", "Hi @lhoestq, this would work with a file-based dataset but would be terrible for a sharded one as it would duplicate the large audio file many times. Also, very long audio files are not embedded very well in the parquet file, even with large_binary(). It crashed a few times for me until I switched to one sample == one file :-( " ]
2024-12-07T22:01:44Z
2024-12-09T21:09:46Z
null
NONE
null
null
### Feature request For most large speech datasets, we do not wish to generate hundreds of millions of small audio samples. Instead, it is quite common to provide larger audio files with a frame offset (soundfile `start` and `stop` arguments). We should be able to pass these arguments to Audio() (as column IDs referring to fields in the dataset row). ### Motivation I am currently generating a fairly big dataset to .parquet(). Unfortunately, it does not work because all existing functions load the whole .wav file corresponding to the row. All my attempts at bypassing this failed. We should be able to put in the Table only the bytes corresponding to what soundfile reads with an offset (i.e. a subset of the audio file). ### Your contribution I can totally test whatever code on my large dataset creation script.
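The offset/duration idea above can be sketched with a tiny frame-math helper (`frames_for_slice` is a hypothetical name, not part of `datasets`): it converts an (offset, duration) pair in seconds into the `start`/`stop` frame arguments that `soundfile.read` already accepts, so only the requested slice of the file would be decoded.

```python
# Hypothetical helper (not part of `datasets`): turn an (offset, duration)
# pair in seconds into the start/stop frame indices that soundfile.read
# accepts, so only that slice of the file gets decoded.
def frames_for_slice(offset_s: float, duration_s: float, sampling_rate: int) -> tuple[int, int]:
    start = int(round(offset_s * sampling_rate))
    stop = start + int(round(duration_s * sampling_rate))
    return start, stop

# A 2.5 s window starting 1.0 s into a 48 kHz file:
start, stop = frames_for_slice(1.0, 2.5, 48000)
print(start, stop)  # 48000 168000
# soundfile.read(path, start=start, stop=stop) would then decode
# frames 48000..168000 instead of the whole file.
```

A per-row version of this (reading `offset`/`duration` columns from the dataset row) is essentially what the feature request asks Audio() to do internally.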
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7310/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7315
7,315
Allow manual configuration of Dataset Viewer for datasets not created with the `datasets` library
{ "avatar_url": "https://avatars.githubusercontent.com/u/114512099?v=4", "events_url": "https://api.github.com/users/diarray-hub/events{/privacy}", "followers_url": "https://api.github.com/users/diarray-hub/followers", "following_url": "https://api.github.com/users/diarray-hub/following{/other_user}", "gists_url": "https://api.github.com/users/diarray-hub/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/diarray-hub", "id": 114512099, "login": "diarray-hub", "node_id": "U_kgDOBtNQ4w", "organizations_url": "https://api.github.com/users/diarray-hub/orgs", "received_events_url": "https://api.github.com/users/diarray-hub/received_events", "repos_url": "https://api.github.com/users/diarray-hub/repos", "site_admin": false, "starred_url": "https://api.github.com/users/diarray-hub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diarray-hub/subscriptions", "type": "User", "url": "https://api.github.com/users/diarray-hub", "user_view_type": "public" }
[]
open
false
[ "Hi @diarray-hub , thanks for opening the issue :) Let me ping @lhoestq and @severo from the dataset viewer team :hugs: ", "amazing :)", "Hi ! why not modify the manifest.json file directly ? this way users see in the viewer the dataset as is instead which makes it easier to use using e.g. the `datasets` library", "Can I create and push the dataset with the dataset library while also pushing the dataset directory, mainting its structure and all the files as with git? ", "(I transferred to the issue to the `datasets` repo as it's not related to `huggingface_hub`)", "> Can I create and push the dataset with the dataset library while also pushing the dataset directory, mainting its structure and all the files as with git?\r\n\r\nyes push_to_hub simply uploads Parquet files in a directory named \"data\" in the git repository\r\n", "That's the problem actually, I need that the data stays in the same format and the directory they are in keep the same structure in order to go quick with Nemo training so users of Nvidia's Nemo framework don't need to write any preprocessing code before starting training. That's why I used git instead of push_to_hub so me and other users working with Nemo can just:\r\n1. git clone\r\n2. asr_model.setup_training_data(train_data_config={'manifest_filepath': training_manifest_filepath})\r\n\r\nAnd start training already. It may be not very kind of me to prioritize users of a specific framework but I noticed that it take much more code to convert an huggingFace dataset with the parquet file to Nemo manifest format than the inverse :haha: ", "Happy to help if you think the Nemo dataset format should be supported in `datasets` (and therefore in the HF Viewer that is based on `datasets`). Maybe the Nemo team could help as well\r\n\r\nThough I'm not sure if there is only one but actually many formats/structure in Nemo depending on the task ?", "Yeah, you're right Quentin, it depends of the task. This one is for ASR. And, yes maybe they can help. 
I noticed that they already share their models through HF. Maybe someone on your team already has a contact point there. Anyway it's not really a big issue since people can easily understand the dataset and its format with the dataset card, but it's a little annoying for those who wanna visually explore each feature with the viewer as for regular HF datasets", "In that case I'd recommend you to upload the dataset in Nemo format and \r\n1) add the \"nemo\" tag\r\n2) add how to use the dataset in Nemo in the dataset README.md\r\n\r\nThe viewer is likely to show the audio content by default but without the transcriptions. You can also configure the viewer to show the transcriptions instead (without the audio).", "I already did, it's just a little bit \"dommage\" (Hope you'll understand, you speak french right? Cause I don't know any english word for this) that I have to choose which one the viewer displays. But it's no problem for the usability of the dataset. Thanks Quentin :+1: ", "It's \"dommage\" for now, but feel free to ping the Nemo people if you think there is room for making this better together :)\r\n\r\nKinda related, but the `datasets` AudioFolder structure looks similar and simply asks for a `metadata.jsonl` with a field named \"file_name\" to link the transcriptions to the audio files - you could also add this file to your repository to make the viewer show audio + transcripts.\r\n\r\nAlternatively maybe we can expand the AudioFolder configuration to allow you to set the metadata file to be the \"manifest.json\" and the linking field to be \"audio_file_name\" (we just need to agree on something general - not just for Nemo)", "Right, actually that was my idea when I opened this issue. That's what I suggested, taking my case as an example, but you should think of a more general approach like adding a field to configure the viewer as you wish in the metadata (in the dataset card) or a config.yaml or json file. 
With a level of abstraction like the solution I proposed, or even a higher one, it would allow for more customizability :)" ]
2024-12-07T16:37:12Z
2024-12-11T11:05:22Z
null
NONE
null
null
#### **Problem Description** Currently, the Hugging Face Dataset Viewer automatically interprets dataset fields for datasets created with the `datasets` library. However, for datasets pushed directly via `git`, the Viewer: - Defaults to generic columns like `label` with `null` values if no explicit mapping is provided. - Does not allow dataset creators to configure field mappings or suppress default fields unless the dataset is recreated and pushed using the `datasets` library. This creates a limitation for creators who: - Use custom workflows to prepare datasets (e.g., manifest files with audio-transcription mappings). - Push large datasets directly via `git` and cannot easily restructure them to conform to the `datasets` library format. #### **Proposed Solution** Introduce a feature that allows dataset creators to manually configure the Dataset Viewer behavior for datasets not created with the `datasets` library. This could be achieved by: 1. **Using the YAML Metadata in `README.md`:** - Add support for defining the dataset's field mappings directly in the `README.md` YAML section. - Example: ```yaml viewer: fields: - name: "audio" type: "audio_path" / "text" source: "manifest['audio']" - name: "bambara_transcription" type: "text" source: "manifest['bambara']" - name: "french_translation" type: "text" source: "manifest['french']" ``` With `manifest` being a CSV- or JSON-like file in the repository, so that the viewer understands it should look up the values of each field in that file. #### **Benefits** - Improves flexibility for dataset creators who push datasets via `git`. - Enhances dataset discoverability and usability on the Hugging Face Hub by allowing creators to present meaningful field mappings without restructuring their data. - Reduces overhead for creators of large or complex datasets. 
#### **Example Use Case** - An audio dataset with transcriptions in multiple languages stored in a `manifest.json` file, where the user wants the Viewer to: - Display the `audio` column and explicitly map user-defined features such as `bambara_transcription` and `french_translation` from the manifest.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7315/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7309
7,309
Faster parquet streaming + filters with predicate pushdown
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7309). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-12-06T18:01:54Z
2024-12-07T23:32:30Z
2024-12-07T23:32:28Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7309.diff", "html_url": "https://github.com/huggingface/datasets/pull/7309", "merged_at": "2024-12-07T23:32:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/7309.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7309" }
ParquetFragment.to_batches uses a buffered stream to read parquet data, which makes streaming faster (x2 on my laptop). I also added the `filters` config parameter to support filtering with predicate pushdown, e.g. ```python from datasets import load_dataset filters = [('problem_source', '==', 'math')] ds = load_dataset("nvidia/OpenMathInstruct-2", streaming=True, filters=filters) first_example = next(iter(ds["train"])) print(first_example["problem_source"]) # 'math' ``` cc @allisonwang-db this is a nice plus for usage in spark
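For reference, the `filters` argument follows pyarrow's DNF convention: a flat list of `(column, op, value)` tuples is an AND, and a list of such lists is an OR of ANDs. A rough pure-Python illustration of those semantics (not pyarrow's actual implementation):

```python
# Rough illustration (not pyarrow's implementation) of the DNF filter
# convention: a flat list of (column, op, value) tuples is an AND;
# a list of such lists is an OR of ANDs.
import operator

_OPS = {"==": operator.eq, "!=": operator.ne, "<": operator.lt,
        "<=": operator.le, ">": operator.gt, ">=": operator.ge,
        "in": lambda a, b: a in b}

def matches(row, filters):
    # Normalize the flat-AND form into the OR-of-ANDs form.
    if filters and isinstance(filters[0], tuple):
        filters = [filters]
    return any(all(_OPS[op](row[col], val) for col, op, val in conj)
               for conj in filters)

row = {"problem_source": "math", "level": 3}
print(matches(row, [("problem_source", "==", "math")]))  # True
print(matches(row, [[("level", ">", 5)], [("problem_source", "==", "math")]]))  # True
```

With predicate pushdown, this evaluation happens inside the parquet reader using row-group statistics, so non-matching data is skipped instead of being downloaded and decoded.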
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7309/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7307
7,307
refactor: remove unnecessary else
{ "avatar_url": "https://avatars.githubusercontent.com/u/20921177?v=4", "events_url": "https://api.github.com/users/HarikrishnanBalagopal/events{/privacy}", "followers_url": "https://api.github.com/users/HarikrishnanBalagopal/followers", "following_url": "https://api.github.com/users/HarikrishnanBalagopal/following{/other_user}", "gists_url": "https://api.github.com/users/HarikrishnanBalagopal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HarikrishnanBalagopal", "id": 20921177, "login": "HarikrishnanBalagopal", "node_id": "MDQ6VXNlcjIwOTIxMTc3", "organizations_url": "https://api.github.com/users/HarikrishnanBalagopal/orgs", "received_events_url": "https://api.github.com/users/HarikrishnanBalagopal/received_events", "repos_url": "https://api.github.com/users/HarikrishnanBalagopal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HarikrishnanBalagopal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HarikrishnanBalagopal/subscriptions", "type": "User", "url": "https://api.github.com/users/HarikrishnanBalagopal", "user_view_type": "public" }
[]
open
false
[]
2024-12-05T12:11:09Z
2024-12-06T15:11:33Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7307.diff", "html_url": "https://github.com/huggingface/datasets/pull/7307", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7307.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7307" }
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7307/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7306
7,306
Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values).
{ "avatar_url": "https://avatars.githubusercontent.com/u/9797804?v=4", "events_url": "https://api.github.com/users/ai-nikolai/events{/privacy}", "followers_url": "https://api.github.com/users/ai-nikolai/followers", "following_url": "https://api.github.com/users/ai-nikolai/following{/other_user}", "gists_url": "https://api.github.com/users/ai-nikolai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ai-nikolai", "id": 9797804, "login": "ai-nikolai", "node_id": "MDQ6VXNlcjk3OTc4MDQ=", "organizations_url": "https://api.github.com/users/ai-nikolai/orgs", "received_events_url": "https://api.github.com/users/ai-nikolai/received_events", "repos_url": "https://api.github.com/users/ai-nikolai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ai-nikolai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ai-nikolai/subscriptions", "type": "User", "url": "https://api.github.com/users/ai-nikolai", "user_view_type": "public" }
[]
open
false
[]
2024-12-05T09:07:53Z
2024-12-05T09:09:38Z
null
NONE
null
null
### Describe the bug When creating a dataset from a list of datapoints, information about the individual items is lost. Specifically, when creating a dataset from a list of datapoints (taken from another dataset), either the datatype is lost or the values are lost. See examples below. -> What is the best way to create a dataset from a list of datapoints? --- e.g.: **When running this code:** ```python from datasets import load_dataset, Dataset commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True) datapoint = next(iter(commonvoice_data)) out = [datapoint] new_data = Dataset.from_list(out) #this loses datatype information new_data2 = Dataset.from_list(out, features=commonvoice_data.features) #this loses value information ``` **We get the following**: --- 1. `datapoint`: (the original datapoint) ``` 'audio': {'path': 'it_test_0/common_voice_it_23606167.mp3', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 2.21619011e-05, 2.72628222e-05, 0.00000000e+00]), 'sampling_rate': 48000} ``` Original Dataset Features: ``` >>> commonvoice_data.features 'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None) ``` - Here we see that the column "audio" has the proper values (both `path` and `array`) and has the correct datatype (Audio). ---- 2. new_data[0]: ``` # Cannot be printed (as it prints the entire array). ``` New Dataset 1 Features: ``` >>> new_data.features 'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)} ``` - Here we see that the column "audio" has the correct values, but is not the Audio datatype anymore. --- 3. 
new_data2[0]: ``` 'audio': {'path': None, 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 48000}, ``` New Dataset 2 Features: ``` >>> new_data2.features 'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None), ``` - Here we see that the column "audio" has the correct datatype, but all the array & path values were lost! ### Steps to reproduce the bug ## Run: ```python from datasets import load_dataset, Dataset commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True) datapoint = next(iter(commonvoice_data)) out = [datapoint] new_data = Dataset.from_list(out) #this loses datatype information new_data2 = Dataset.from_list(out, features=commonvoice_data.features) #this loses value information ``` ### Expected behavior ## Expected: ```datapoint == new_data[0]``` AND ```datapoint == new_data2[0]``` ### Environment info - `datasets` version: 3.1.0 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.26.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7306/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7305
7,305
Build Documentation Test Fails Due to "Bad Credentials" Error
{ "avatar_url": "https://avatars.githubusercontent.com/u/31152346?v=4", "events_url": "https://api.github.com/users/ruidazeng/events{/privacy}", "followers_url": "https://api.github.com/users/ruidazeng/followers", "following_url": "https://api.github.com/users/ruidazeng/following{/other_user}", "gists_url": "https://api.github.com/users/ruidazeng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ruidazeng", "id": 31152346, "login": "ruidazeng", "node_id": "MDQ6VXNlcjMxMTUyMzQ2", "organizations_url": "https://api.github.com/users/ruidazeng/orgs", "received_events_url": "https://api.github.com/users/ruidazeng/received_events", "repos_url": "https://api.github.com/users/ruidazeng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ruidazeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruidazeng/subscriptions", "type": "User", "url": "https://api.github.com/users/ruidazeng", "user_view_type": "public" }
[]
open
false
[ "how were you able to fix this please?", "> how were you able to fix this please?\r\n\r\nI was not able to fix this." ]
2024-12-03T20:22:54Z
2025-01-08T22:38:14Z
null
CONTRIBUTOR
null
null
### Describe the bug The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors. ### Steps to reproduce the bug 1. Trigger the `build_main_documentation` job. 2. Observe the logs during the "Syncing repository" step. ### Expected behavior The workflow should be able to retrieve the default branch name without encountering credential issues. ### Environment info ```plaintext Syncing repository: huggingface/notebooks Getting Git version info Temporarily overriding HOME='/home/runner/work/_temp/00e62748-9940-4a4f-bbbc-eb2cda6d7ed6' before making global git config changes Adding repository directory to the temporary git global config as a safe directory /usr/bin/git config --global --add safe.directory /home/runner/work/datasets/datasets/notebooks Initializing the repository Disabling automatic garbage collection Setting up auth Determining the default branch Retrieving the default branch name Bad credentials - https://docs.github.com/rest Waiting 20 seconds before trying again Retrieving the default branch name Bad credentials - https://docs.github.com/rest Waiting 19 seconds before trying again Retrieving the default branch name Error: Bad credentials - https://docs.github.com/rest ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7305/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7304
7,304
Update iterable_dataset.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7304). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-12-03T14:25:42Z
2024-12-03T14:28:10Z
2024-12-03T14:27:02Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7304.diff", "html_url": "https://github.com/huggingface/datasets/pull/7304", "merged_at": "2024-12-03T14:27:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/7304.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7304" }
close https://github.com/huggingface/datasets/issues/7297
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7304/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7303
7,303
DataFilesNotFoundError for datasets LM1B
{ "avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4", "events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}", "followers_url": "https://api.github.com/users/hml1996-fight/followers", "following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}", "gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hml1996-fight", "id": 72264324, "login": "hml1996-fight", "node_id": "MDQ6VXNlcjcyMjY0MzI0", "organizations_url": "https://api.github.com/users/hml1996-fight/orgs", "received_events_url": "https://api.github.com/users/hml1996-fight/received_events", "repos_url": "https://api.github.com/users/hml1996-fight/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions", "type": "User", "url": "https://api.github.com/users/hml1996-fight", "user_view_type": "public" }
[]
closed
false
[ "Hi ! Can you try with a more recent version of `datasets` ? Also you might need to pass trust_remote_code=True since it's a script based dataset" ]
2024-11-29T17:27:45Z
2024-12-11T13:22:47Z
2024-12-11T13:22:47Z
NONE
null
null
### Describe the bug Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b ### Steps to reproduce the bug `dataset = datasets.load_dataset('lm1b', split=split)` ### Expected behavior `Traceback (most recent call last): File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/word_freq.py", line 13, in <module> train_data = DiffusionLoader(tokenizer=tokenizer).my_load(task_name='lm1b', splits=['train'])[0] File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in my_load return [self._load(task_name, name) for name in splits] File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in <listcomp> return [self._load(task_name, name) for name in splits] File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 13, in _load dataset = datasets.load_dataset('lm1b', split=split) File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2594, in load_dataset builder_instance = load_dataset_builder( File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2266, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1827, in dataset_module_factory ).get_module() File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1040, in get_module module_name, default_builder_kwargs = infer_module_for_data_files( File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 598, in infer_module_for_data_files raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in lm1b` ### Environment info datasets: 2.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4", "events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}", "followers_url": "https://api.github.com/users/hml1996-fight/followers", "following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}", "gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hml1996-fight", "id": 72264324, "login": "hml1996-fight", "node_id": "MDQ6VXNlcjcyMjY0MzI0", "organizations_url": "https://api.github.com/users/hml1996-fight/orgs", "received_events_url": "https://api.github.com/users/hml1996-fight/received_events", "repos_url": "https://api.github.com/users/hml1996-fight/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions", "type": "User", "url": "https://api.github.com/users/hml1996-fight", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7303/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7302
7,302
Let server decide default repo visibility
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7302). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "No need for a specific version of huggingface_hub to avoid a breaking change no (it's a server-side change)" ]
2024-11-28T16:01:13Z
2024-11-29T17:00:40Z
2024-11-29T17:00:38Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7302.diff", "html_url": "https://github.com/huggingface/datasets/pull/7302", "merged_at": "2024-11-29T17:00:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/7302.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7302" }
Until now, all repos were public by default when created without passing the `private` argument. This meant that passing `private=False` or `private=None` was strictly the same. This is not the case anymore. Enterprise Hub now lets organizations set a default visibility for new repos, which is useful for organizations that forbid public repos for security reasons. This PR mostly updates docstrings + default values so that `private=None` is always passed when users don't set it manually. This PR doesn't create any breaking change. The real update has been done server-side when introducing the new Enterprise Hub feature. Related to https://github.com/huggingface/huggingface_hub/pull/2679.
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7302/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7301
7,301
update load_dataset doctring
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7301). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-11-28T11:19:20Z
2024-11-29T10:31:43Z
2024-11-29T10:31:40Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7301.diff", "html_url": "https://github.com/huggingface/datasets/pull/7301", "merged_at": "2024-11-29T10:31:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/7301.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7301" }
- remove canonical dataset name - remove dataset script logic - add streaming info - clearer download and prepare steps
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7301/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7300
7,300
fix: update elasticsearch version
{ "avatar_url": "https://avatars.githubusercontent.com/u/31152346?v=4", "events_url": "https://api.github.com/users/ruidazeng/events{/privacy}", "followers_url": "https://api.github.com/users/ruidazeng/followers", "following_url": "https://api.github.com/users/ruidazeng/following{/other_user}", "gists_url": "https://api.github.com/users/ruidazeng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ruidazeng", "id": 31152346, "login": "ruidazeng", "node_id": "MDQ6VXNlcjMxMTUyMzQ2", "organizations_url": "https://api.github.com/users/ruidazeng/orgs", "received_events_url": "https://api.github.com/users/ruidazeng/received_events", "repos_url": "https://api.github.com/users/ruidazeng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ruidazeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruidazeng/subscriptions", "type": "User", "url": "https://api.github.com/users/ruidazeng", "user_view_type": "public" }
[]
closed
false
[ "May I request a review @lhoestq", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7300). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-11-28T09:14:21Z
2024-12-03T14:36:56Z
2024-12-03T14:24:42Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7300.diff", "html_url": "https://github.com/huggingface/datasets/pull/7300", "merged_at": "2024-12-03T14:24:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/7300.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7300" }
This should fix the `test_py311 (windows latest, deps-latest)` errors. ``` =========================== short test summary info =========================== ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead. ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead. ===== 2822 passed, 54 skipped, 10 warnings, 2 errors in 373.36s (0:06:13) ===== Error: Process completed with exit code 1. ``` The elasticsearch version used is `elasticsearch==7.9.1`, which is 4 years old and uses the removed `numpy.float_`. elasticsearch fixed this in [https://github.com/elastic/elasticsearch-py/pull/2551](https://github.com/elastic/elasticsearch-py/pull/2551) and released the fix in 8.15.0 (August 2024) and 7.17.12 (September 2024).
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7300/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7299
7,299
Efficient Image Augmentation in Hugging Face Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/46443190?v=4", "events_url": "https://api.github.com/users/fabiozappo/events{/privacy}", "followers_url": "https://api.github.com/users/fabiozappo/followers", "following_url": "https://api.github.com/users/fabiozappo/following{/other_user}", "gists_url": "https://api.github.com/users/fabiozappo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fabiozappo", "id": 46443190, "login": "fabiozappo", "node_id": "MDQ6VXNlcjQ2NDQzMTkw", "organizations_url": "https://api.github.com/users/fabiozappo/orgs", "received_events_url": "https://api.github.com/users/fabiozappo/received_events", "repos_url": "https://api.github.com/users/fabiozappo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fabiozappo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabiozappo/subscriptions", "type": "User", "url": "https://api.github.com/users/fabiozappo", "user_view_type": "public" }
[]
open
false
[]
2024-11-26T16:50:32Z
2024-11-26T16:53:53Z
null
NONE
null
null
### Describe the bug I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to fix the inconsistent image sizes in the dataset and perform some on-the-fly image augmentation. I can only think of doing this in the `collate_fn`, but that seems quite inefficient. I'm new to the Hugging Face datasets library and didn't find anything in the documentation or the issues here on GitHub. Is there an existing way to add image transformations directly to the dataset loading pipeline? ### Steps to reproduce the bug ```python from datasets import load_dataset from torch.utils.data import DataLoader def collate_fn(batch): images = [item['image'] for item in batch] texts = [item['text'] for item in batch] return { 'images': images, 'texts': texts } dataset = load_dataset("Yuki20/pokemon_caption", split="train") dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn) # Output shows varying image sizes: # [(1280, 1280), (431, 431), (789, 789), (769, 769)] ``` ### Expected behavior I'm looking for a way to resize images on-the-fly when loading the dataset, similar to PyTorch's `Dataset.__getitem__` functionality. This would be more efficient than handling resizing in the `collate_fn`. ### Environment info - `datasets` version: 3.1.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.11.10 - `huggingface_hub` version: 0.26.2 - PyArrow version: 18.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7299/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7298
7,298
loading dataset issue with load_dataset() when training controlnet
{ "avatar_url": "https://avatars.githubusercontent.com/u/81594044?v=4", "events_url": "https://api.github.com/users/sarahahtee/events{/privacy}", "followers_url": "https://api.github.com/users/sarahahtee/followers", "following_url": "https://api.github.com/users/sarahahtee/following{/other_user}", "gists_url": "https://api.github.com/users/sarahahtee/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahahtee", "id": 81594044, "login": "sarahahtee", "node_id": "MDQ6VXNlcjgxNTk0MDQ0", "organizations_url": "https://api.github.com/users/sarahahtee/orgs", "received_events_url": "https://api.github.com/users/sarahahtee/received_events", "repos_url": "https://api.github.com/users/sarahahtee/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahahtee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahahtee/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahahtee", "user_view_type": "public" }
[]
open
false
[]
2024-11-26T10:50:18Z
2024-11-26T10:50:18Z
null
NONE
null
null
### Describe the bug I'm unable to load my dataset for [controlnet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using load_dataset(). However, load_from_disk() seems to work? I'd appreciate it if someone could explain why that's the case. 1. For reference, here's the structure of the original training files _before_ dataset creation - ``` - dir train - dir A (illustrations) - dir B (SignWriting) - prompt.json containing: {"source": "B/file.png", "target": "A/file.png", "prompt": "..."} ``` 2. Here are the features _after_ dataset creation - ``` "features": { "control_image": { "_type": "Image" }, "image": { "_type": "Image" }, "caption": { "dtype": "string", "_type": "Value" } ``` 3. I've also attempted to upload the dataset to Hugging Face, with the same error output ### Steps to reproduce the bug 1. [dataset creation script](https://github.com/sign-language-processing/signwriting-illustration/blob/main/signwriting_illustration/controlnet_huggingface/dataset.py) 2. controlnet [training script](examples/controlnet/train_controlnet.py) used 3. training parameters -
``` accelerate launch diffusers/examples/controlnet/train_controlnet.py \ --pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \ --output_dir="$OUTPUT_DIR" \ --train_data_dir="$HF_DATASET_DIR" \ --conditioning_image_column=control_image \ --image_column=image \ --caption_column=caption \ --resolution=512 \ --learning_rate=1e-5 \ --validation_image "./validation/0a4b3c71265bb3a726457837428dda78.png" "./validation/0a5922fe2c638e6776bd62f623145004.png" "./validation/1c9f1a53106f64c682cf5d009ee7156f.png" \ --validation_prompt "An illustration of a man with short hair" "An illustration of a woman with short hair" "An illustration of Barack Obama" \ --train_batch_size=4 \ --num_train_epochs=500 \ --tracker_project_name="sd-controlnet-signwriting-test" \ --hub_model_id="sarahahtee/signwriting-illustration-test" \ --checkpointing_steps=5000 \ --validation_steps=1000 \ --report_to wandb \ --push_to_hub ``` 4. command - ` sbatch --export=HUGGINGFACE_TOKEN=hf_token,WANDB_API_KEY=api_key script.sh` ### Expected behavior ``` 11/25/2024 17:12:18 - INFO - __main__ - Initializing controlnet weights from unet Generating train split: 1 examples [00:00, 334.85 examples/s] Traceback (most recent call last): File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 1189, in <module> main(args) File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 923, in main train_dataset = make_train_dataset(args, tokenizer, accelerator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 639, in make_train_dataset raise ValueError( ValueError: `--image_column` value 'image' not found in dataset columns. 
Dataset columns are: _data_files, _fingerprint, _format_columns, _format_kwargs, _format_type, _output_all_columns, _split ``` ### Environment info accelerate 1.1.1 huggingface-hub 0.26.2 python 3.11 torch 2.5.1 transformers 4.46.2
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7298/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7297
7,297
wrong return type for `IterableDataset.shard()`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47225236?v=4", "events_url": "https://api.github.com/users/ysngshn/events{/privacy}", "followers_url": "https://api.github.com/users/ysngshn/followers", "following_url": "https://api.github.com/users/ysngshn/following{/other_user}", "gists_url": "https://api.github.com/users/ysngshn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ysngshn", "id": 47225236, "login": "ysngshn", "node_id": "MDQ6VXNlcjQ3MjI1MjM2", "organizations_url": "https://api.github.com/users/ysngshn/orgs", "received_events_url": "https://api.github.com/users/ysngshn/received_events", "repos_url": "https://api.github.com/users/ysngshn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ysngshn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysngshn/subscriptions", "type": "User", "url": "https://api.github.com/users/ysngshn", "user_view_type": "public" }
[]
closed
false
[ "Oops my bad ! thanks for reporting" ]
2024-11-22T17:25:46Z
2024-12-03T14:27:27Z
2024-12-03T14:27:03Z
NONE
null
null
### Describe the bug `IterableDataset.shard()` has the wrong typing for its return as `"Dataset"`. It should be `"IterableDataset"`. Makes my IDE unhappy. ### Steps to reproduce the bug look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)? ### Expected behavior Correct return type as `"IterableDataset"` ### Environment info datasets==3.1.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7297/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7296
7,296
Remove upper version limit of fsspec[http]
{ "avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4", "events_url": "https://api.github.com/users/cyyever/events{/privacy}", "followers_url": "https://api.github.com/users/cyyever/followers", "following_url": "https://api.github.com/users/cyyever/following{/other_user}", "gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyyever", "id": 17618148, "login": "cyyever", "node_id": "MDQ6VXNlcjE3NjE4MTQ4", "organizations_url": "https://api.github.com/users/cyyever/orgs", "received_events_url": "https://api.github.com/users/cyyever/received_events", "repos_url": "https://api.github.com/users/cyyever/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyyever/subscriptions", "type": "User", "url": "https://api.github.com/users/cyyever", "user_view_type": "public" }
[]
closed
false
[]
2024-11-20T11:29:16Z
2025-03-06T04:47:04Z
2025-03-06T04:47:01Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7296.diff", "html_url": "https://github.com/huggingface/datasets/pull/7296", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7296.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7296" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4", "events_url": "https://api.github.com/users/cyyever/events{/privacy}", "followers_url": "https://api.github.com/users/cyyever/followers", "following_url": "https://api.github.com/users/cyyever/following{/other_user}", "gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyyever", "id": 17618148, "login": "cyyever", "node_id": "MDQ6VXNlcjE3NjE4MTQ4", "organizations_url": "https://api.github.com/users/cyyever/orgs", "received_events_url": "https://api.github.com/users/cyyever/received_events", "repos_url": "https://api.github.com/users/cyyever/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyyever/subscriptions", "type": "User", "url": "https://api.github.com/users/cyyever", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7296/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7295
7,295
[BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'`
{ "avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4", "events_url": "https://api.github.com/users/casper-hansen/events{/privacy}", "followers_url": "https://api.github.com/users/casper-hansen/followers", "following_url": "https://api.github.com/users/casper-hansen/following{/other_user}", "gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/casper-hansen", "id": 27340033, "login": "casper-hansen", "node_id": "MDQ6VXNlcjI3MzQwMDMz", "organizations_url": "https://api.github.com/users/casper-hansen/orgs", "received_events_url": "https://api.github.com/users/casper-hansen/received_events", "repos_url": "https://api.github.com/users/casper-hansen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions", "type": "User", "url": "https://api.github.com/users/casper-hansen", "user_view_type": "public" }
[]
open
false
[]
2024-11-19T12:23:36Z
2024-11-19T13:01:53Z
null
NONE
null
null
### Describe the bug Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions. Analysis of what's happening: 1. `datasets` passes the `client_kwargs` through `fsspec` 2. `fsspec` passes the `client_kwargs` through `s3fs` 3. `s3fs` passes the `client_kwargs` to `aiobotocore` which uses `aiohttp` ``` s3creator = self.session.create_client( "s3", config=conf, **init_kwargs, **client_kwargs ) ``` 4. The `session` tries to create an `aiohttp` session but the `**kwargs` are not just kept as unfolded `**kwargs` but passed in as individual variables (`requote_redirect_url` and `trust_env`). Error: ``` Traceback (most recent call last): File "/Users/cxrh/Documents/GitHub/nlp_foundation/nlp_train/test.py", line 14, in <module> batch = next(iter(ds)) File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__ for key, example in ex_iterable: File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 255, in __iter__ for key, pa_table in self.generate_tables_fn(**self.kwargs): File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py", line 78, in _generate_tables for file_idx, file in enumerate(itertools.chain.from_iterable(files)): File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 840, in __iter__ yield from self.generator(*self.args, **self.kwargs) File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 921, in _iter_from_urlpaths elif xisdir(urlpath, download_config=download_config): File 
"/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 305, in xisdir return fs.isdir(inner_path) File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/spec.py", line 721, in isdir return self.info(path)["type"] == "directory" File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/archive.py", line 38, in info self._get_dirs() File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/filesystems/compression.py", line 64, in _get_dirs f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name} File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 118, in wrapper return sync(self.loop, func, *args, **kwargs) File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync raise return_result File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner result[0] = await coro File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 1302, in _info out = await self._call_s3( File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 341, in _call_s3 await self.set_session() File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 524, in set_session s3creator = self.session.create_client( File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/aiobotocore/session.py", line 114, in create_client return ClientCreatorContext(self._create_client(*args, **kwargs)) TypeError: AioSession._create_client() got an unexpected keyword argument 'requote_redirect_url' ``` ### Steps to reproduce the bug 1. 
Install the necessary libraries (`datasets` needs to be at least 2.19.0): ``` pip install s3fs fsspec aiohttp aiobotocore botocore 'datasets>=2.19.0' ``` 2. Run this code: ``` from datasets import load_dataset ds = load_dataset( "json", data_files="s3://your_path/*.jsonl.gz", streaming=True, split="train", ) batch = next(iter(ds)) print(batch) ``` 3. You get the `unexpected keyword argument 'requote_redirect_url'` error. ### Expected behavior `datasets` is able to load a batch from the dataset stored on S3 without triggering this `requote_redirect_url` error. Fix: I could fix this by directly removing `requote_redirect_url` and `trust_env` - then it loads properly. <img width="1127" alt="image" src="https://github.com/user-attachments/assets/4c40efa9-8787-4919-b613-e4908c3d1ab2"> ### Environment info - `datasets` version: 3.1.0 - Platform: macOS-15.1-arm64-arm-64bit - Python version: 3.10.15 - `huggingface_hub` version: 0.26.2 - PyArrow version: 18.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7295/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7294
7,294
Remove `aiohttp` from direct dependencies
{ "avatar_url": "https://avatars.githubusercontent.com/u/58669?v=4", "events_url": "https://api.github.com/users/akx/events{/privacy}", "followers_url": "https://api.github.com/users/akx/followers", "following_url": "https://api.github.com/users/akx/following{/other_user}", "gists_url": "https://api.github.com/users/akx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akx", "id": 58669, "login": "akx", "node_id": "MDQ6VXNlcjU4NjY5", "organizations_url": "https://api.github.com/users/akx/orgs", "received_events_url": "https://api.github.com/users/akx/received_events", "repos_url": "https://api.github.com/users/akx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akx/subscriptions", "type": "User", "url": "https://api.github.com/users/akx", "user_view_type": "public" }
[]
closed
false
[]
2024-11-18T14:00:59Z
2025-05-07T14:27:18Z
2025-05-07T14:27:17Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7294.diff", "html_url": "https://github.com/huggingface/datasets/pull/7294", "merged_at": "2025-05-07T14:27:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/7294.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7294" }
The dependency is only used for catching an exception from other code. That can be done with an import guard.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7294/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7293
7,293
Updated inconsistent output in documentation examples for `ClassLabel`
{ "avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4", "events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}", "followers_url": "https://api.github.com/users/sergiopaniego/followers", "following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}", "gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sergiopaniego", "id": 17179696, "login": "sergiopaniego", "node_id": "MDQ6VXNlcjE3MTc5Njk2", "organizations_url": "https://api.github.com/users/sergiopaniego/orgs", "received_events_url": "https://api.github.com/users/sergiopaniego/received_events", "repos_url": "https://api.github.com/users/sergiopaniego/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions", "type": "User", "url": "https://api.github.com/users/sergiopaniego", "user_view_type": "public" }
[]
closed
false
[ "Updated! 😄 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7293). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq, can you help with this failing test please? 🙏 " ]
2024-11-16T16:20:57Z
2024-12-06T11:33:33Z
2024-12-06T11:32:01Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7293.diff", "html_url": "https://github.com/huggingface/datasets/pull/7293", "merged_at": "2024-12-06T11:32:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/7293.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7293" }
fix #7129 @stevhliu
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7293/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7292
7,292
DataFilesNotFoundError for datasets `OpenMol/PubChemSFT`
{ "avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4", "events_url": "https://api.github.com/users/xnuohz/events{/privacy}", "followers_url": "https://api.github.com/users/xnuohz/followers", "following_url": "https://api.github.com/users/xnuohz/following{/other_user}", "gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xnuohz", "id": 17878022, "login": "xnuohz", "node_id": "MDQ6VXNlcjE3ODc4MDIy", "organizations_url": "https://api.github.com/users/xnuohz/orgs", "received_events_url": "https://api.github.com/users/xnuohz/received_events", "repos_url": "https://api.github.com/users/xnuohz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions", "type": "User", "url": "https://api.github.com/users/xnuohz", "user_view_type": "public" }
[]
closed
false
[ "Hi ! If the dataset owner uses `push_to_hub()` instead of `save_to_disk()` and upload the local files it will fix the issue.\r\nRight now `datasets` sees the train/test/valid pickle files but they are not supported file formats.", "Alternatively you can load the arrow file instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('OpenMol/PubChemSFT', data_files='stage1/*.arrow')\r\n```", "Thanks! I'll have a try." ]
2024-11-16T11:54:31Z
2024-11-19T00:53:00Z
2024-11-19T00:52:59Z
NONE
null
null
### Describe the bug

Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT

### Steps to reproduce the bug

```
from datasets import load_dataset
dataset = load_dataset('OpenMol/PubChemSFT')
```

### Expected behavior

```
---------------------------------------------------------------------------
DataFilesNotFoundError                    Traceback (most recent call last)
Cell In[7], line 2
      1 from datasets import load_dataset
----> 2 dataset = load_dataset('OpenMol/PubChemSFT')

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2587, in load_dataset(...)
   2586 # Create a dataset builder
-> 2587 builder_instance = load_dataset_builder(...)

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2259, in load_dataset_builder(...)
-> 2259 dataset_module = dataset_module_factory(...)

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1904, in dataset_module_factory(...)
   1903 if isinstance(e1, (DataFilesNotFoundError, DatasetNotFoundError, EmptyDatasetError)):
-> 1904     raise e1 from None

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1885, in dataset_module_factory(...)
-> 1885     return HubDatasetModuleFactoryWithoutScript(...).get_module()

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1270, in HubDatasetModuleFactoryWithoutScript.get_module(self)
-> 1270 module_name, default_builder_kwargs = infer_module_for_data_files(...)

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config)
    596 if not module_name:
--> 597     raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))

DataFilesNotFoundError: No (supported) data files found in OpenMol/PubChemSFT
```

### Environment info

```
- `datasets` version: 3.1.0
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- `huggingface_hub` version: 0.25.2
- PyArrow version: 18.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4", "events_url": "https://api.github.com/users/xnuohz/events{/privacy}", "followers_url": "https://api.github.com/users/xnuohz/followers", "following_url": "https://api.github.com/users/xnuohz/following{/other_user}", "gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xnuohz", "id": 17878022, "login": "xnuohz", "node_id": "MDQ6VXNlcjE3ODc4MDIy", "organizations_url": "https://api.github.com/users/xnuohz/orgs", "received_events_url": "https://api.github.com/users/xnuohz/received_events", "repos_url": "https://api.github.com/users/xnuohz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions", "type": "User", "url": "https://api.github.com/users/xnuohz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7292/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7291
7,291
Why doesn't `return_tensors='pt'` work?
{ "avatar_url": "https://avatars.githubusercontent.com/u/86752851?v=4", "events_url": "https://api.github.com/users/bw-wang19/events{/privacy}", "followers_url": "https://api.github.com/users/bw-wang19/followers", "following_url": "https://api.github.com/users/bw-wang19/following{/other_user}", "gists_url": "https://api.github.com/users/bw-wang19/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bw-wang19", "id": 86752851, "login": "bw-wang19", "node_id": "MDQ6VXNlcjg2NzUyODUx", "organizations_url": "https://api.github.com/users/bw-wang19/orgs", "received_events_url": "https://api.github.com/users/bw-wang19/received_events", "repos_url": "https://api.github.com/users/bw-wang19/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bw-wang19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bw-wang19/subscriptions", "type": "User", "url": "https://api.github.com/users/bw-wang19", "user_view_type": "public" }
[]
open
false
[ "Hi ! `datasets` uses Arrow as storage backend which is agnostic to deep learning frameworks like torch. If you want to get torch tensors back, you need to do `dataset = dataset.with_format(\"torch\")`", "> Hi ! `datasets` uses Arrow as storage backend which is agnostic to deep learning frameworks like torch. If you want to get torch tensors back, you need to do `dataset = dataset.with_format(\"torch\")`\r\n\r\nIt does work! Thanks for your suggestion!" ]
2024-11-15T15:01:23Z
2024-11-18T13:47:08Z
null
NONE
null
null
### Describe the bug

I tried to add `input_ids` to a dataset with `map()`, passing `return_tensors='pt'`, but the returned column has type `List` instead of a tensor. ![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c)

### Steps to reproduce the bug

![image](https://github.com/user-attachments/assets/5d504d4c-22c7-4742-99a1-9cab78739b17)

### Expected behavior

Sorry for the silly question, I'm new to this tool. I expected a tensor back, since I passed `return_tensors='pt'`. When I tokenize a single sentence with `tokenized_input = tokenizer(input, return_tensors='pt')`, it does return a tensor. Why doesn't it work inside `map()`?

### Environment info

transformers>=4.41.2,<=4.45.0 datasets>=2.16.0,<=2.21.0 accelerate>=0.30.1,<=0.34.2 peft>=0.11.1,<=0.12.0 trl>=0.8.6,<=0.9.6 gradio>=4.0.0 pandas>=2.0.0 scipy einops sentencepiece tiktoken protobuf uvicorn pydantic fastapi sse-starlette matplotlib>=3.7.0 fire packaging pyyaml numpy<2.0.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7291/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7290
7,290
`Dataset.save_to_disk` hangs when using num_proc > 1
{ "avatar_url": "https://avatars.githubusercontent.com/u/22243463?v=4", "events_url": "https://api.github.com/users/JohannesAck/events{/privacy}", "followers_url": "https://api.github.com/users/JohannesAck/followers", "following_url": "https://api.github.com/users/JohannesAck/following{/other_user}", "gists_url": "https://api.github.com/users/JohannesAck/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JohannesAck", "id": 22243463, "login": "JohannesAck", "node_id": "MDQ6VXNlcjIyMjQzNDYz", "organizations_url": "https://api.github.com/users/JohannesAck/orgs", "received_events_url": "https://api.github.com/users/JohannesAck/received_events", "repos_url": "https://api.github.com/users/JohannesAck/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JohannesAck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohannesAck/subscriptions", "type": "User", "url": "https://api.github.com/users/JohannesAck", "user_view_type": "public" }
[]
open
false
[ "I've met the same situations.\r\n\r\nHere's my logs:\r\nnum_proc = 64, I stop it early as it cost **too** much time.\r\n```\r\nSaving the dataset (1540/4775 shards): 32%|███▏ | 47752224/147853764 [15:32:54<132:28:34, 209.89 examples/s]\r\nSaving the dataset (1540/4775 shards): 32%|███▏ | 47754224/147853764 [15:32:54<78:14:37, 355.37 examples/s] \r\nSaving the dataset (1540/4775 shards): 32%|███▏ | 47755224/147853764 [15:32:59<93:43:45, 296.65 examples/s]\r\n```\r\n\r\nnum_proc = 1(Not set num_proc parameter)\r\n```\r\nSaving the dataset (1753/4775 shards): 37%|███▋ | 54301556/147853764 [24:46<38:33, 40440.93 examples/s]\r\nSaving the dataset (1753/4775 shards): 37%|███▋ | 54306556/147853764 [24:46<39:34, 39392.01 examples/s]\r\nSaving the dataset (1753/4775 shards): 37%|███▋ | 54311520/147853764 [24:46<38:56, 40030.53 examples/s]\r\n```\r\n\r\nI check the conditions of CPUs and Memory I/O, I found that disk I/O was blocked, but CPU and memory usage was high. There should be some bugs in the code.\r\n\r\n", "Any new process on this issue? I'm encountering the same issue.", "Not getting this issue. 
\n\nMy output;\n\n`Saving the dataset (0/87 shards): 7%|▎ | 588000/8557560 [01:29<21:02, 6314.04 examples/s]`\n\nAt setting `num_proc=64`, and \n\n`Saving the dataset (0/87 shards): 0%| | 28000/8557560 [03:20<16:59:06, 139.49 examples/s]` \n\nAt num_proc=1 (pass nothing)\n\nMy `pyproject.toml`; \n\n```\n[project]\nname = \"test\"\nversion = \"0.1.0\"\nrequires-python = \"==3.13.0\"\ndependencies = [\n \"absl-py==2.1.0\",\n \"accelerate==1.7.0\",\n \"aiohappyeyeballs==2.6.1\",\n \"aiohttp==3.12.11\",\n \"aiosignal==1.3.2\",\n \"annotated-types==0.7.0\",\n \"appdirs==1.4.4\",\n \"argcomplete>=1.8.1\",\n \"astunparse==1.6.3\",\n \"async-timeout==5.0.1\",\n \"attrs==21.2.0\",\n \"automat==20.2.0\",\n \"babel==2.8.0\",\n \"backcall==0.2.0\",\n \"bcrypt==3.2.0\",\n \"beautifulsoup4==4.10.0\",\n \"beniget==0.4.2\",\n \"bleach==4.1.0\",\n \"blinker==1.4\",\n \"blis==1.3.0\",\n \"bottle==0.12.19\",\n \"brotli==1.0.9\",\n \"catalogue==2.0.10\",\n \"certifi==2020.6.20\",\n \"cffi==1.15.0\",\n \"chardet==4.0.0\",\n \"charset-normalizer==3.4.2\",\n \"click==8.0.3\",\n \"cloudpathlib==0.21.1\",\n \"colorama>=0.4.4\",\n \"commonmark==0.9.1\",\n \"confection==0.1.5\",\n \"configobj==5.0.6\",\n \"constantly==15.1.0\",\n \"cryptography==3.4.8\",\n \"ctop==1.0.0\",\n \"cycler==0.11.0\",\n \"cymem==2.0.11\",\n \"datasets==3.6.0\",\n \"dbus-python==1.2.18\",\n \"decorator==4.4.2\",\n \"defusedxml==0.7.1\",\n \"dill==0.3.8\",\n \"distlib==0.3.4\",\n \"distro==1.7.0\",\n \"einops==0.8.1\",\n \"en-core-web-sm\",\n \"entrypoints==0.4\",\n \"evaluate==0.4.3\",\n \"filelock==3.6.0\",\n \"flake8==4.0.1\",\n \"flatbuffers==25.2.10\",\n \"fonttools==4.29.1\",\n \"frozenlist==1.7.0\",\n \"fs==2.4.12\",\n \"fsspec==2024.3.1\",\n \"future==0.18.2\",\n \"gast==0.6.0\",\n \"gitdb==4.0.12\",\n \"gitpython==3.1.44\",\n \"glances==3.2.4.2\",\n \"google-pasta==0.2.0\",\n \"grpcio==1.73.0\",\n \"h5py==3.14.0\",\n \"hf-xet==1.1.3\",\n \"html5lib==1.1\",\n \"httplib2==0.20.2\",\n 
\"huggingface-hub==0.32.4\",\n \"hyperlink==21.0.0\",\n \"icdiff==2.0.4\",\n \"idna==3.3\",\n \"importlib-metadata==4.6.4\",\n \"incremental==21.3.0\",\n \"influxdb==5.3.1\",\n \"ipykernel==6.7.0\",\n \"ipython==7.31.1\",\n \"ipython-genutils==0.2.0\",\n \"jax>=0.5.1\",\n \"jax-cuda12-pjrt>=0.5.1\",\n \"jax-cuda12-plugin>=0.5.1\",\n \"jaxlib>=0.5.1\",\n \"jedi==0.18.0\",\n \"jeepney==0.7.1\",\n \"jinja2==3.0.3\",\n \"joblib>=0.17.0\",\n \"jsonpatch==1.32\",\n \"jsonpointer==2.0\",\n \"jsonschema==3.2.0\",\n \"jupyter-client==7.1.2\",\n \"jupyter-core==4.9.1\",\n \"kaptan>=0.5.12\",\n \"keras==3.6.0\",\n \"keyring==23.5.0\",\n \"kiwisolver==1.3.2\",\n \"langcodes==3.5.0\",\n \"language-data==1.3.0\",\n \"launchpadlib==1.10.16\",\n \"lazr-restfulclient==0.14.4\",\n \"lazr-uri==1.0.6\",\n \"libclang==18.1.1\",\n \"libtmux==0.10.1\",\n \"livereload==2.6.3\",\n \"lxml==6.0.0\",\n \"lz4==3.1.3\",\n \"marisa-trie==1.2.1\",\n \"markdown==3.3.6\",\n \"markupsafe==2.0.1\",\n \"matplotlib==3.5.1\",\n \"matplotlib-inline==0.1.3\",\n \"mccabe==0.6.1\",\n \"mkdocs==1.1.2\",\n \"ml-dtypes==0.5.1\",\n \"more-itertools==8.10.0\",\n \"mpmath>=0.0.0\",\n \"msgpack==1.0.3\",\n \"multidict==6.4.4\",\n \"multiprocess==0.70.16\",\n \"murmurhash==1.0.13\",\n \"namex==0.0.8\",\n \"nest-asyncio==1.5.4\",\n \"netifaces==0.11.0\",\n \"networkx==2.4\",\n \"nltk==3.9.1\",\n \"numpy>=1.23.3\",\n \"nvidia-ml-py==12.555.43\",\n \"oauthlib==3.2.0\",\n \"olefile==0.46\",\n \"opt-einsum==3.3.0\",\n \"optree==0.13.1\",\n \"packaging==21.3\",\n \"pandas==2.3.0\",\n \"parso==0.8.1\",\n \"pexpect==4.8.0\",\n \"pickleshare==0.7.5\",\n \"pillow>=9.0.1\",\n \"pipx==1.0.0\",\n \"platformdirs==2.5.1\",\n \"ply==3.11\",\n \"portalocker==3.1.1\",\n \"preshed==3.0.10\",\n \"prompt-toolkit==3.0.28\",\n \"propcache==0.3.2\",\n \"protobuf==4.21.12\",\n \"psutil==5.9.0\",\n \"ptyprocess==0.7.0\",\n \"py==1.10.0\",\n \"pyarrow==20.0.0\",\n \"pyasn1==0.4.8\",\n \"pyasn1-modules==0.2.1\",\n \"pycodestyle==2.8.0\",\n 
\"pycparser==2.21\",\n \"pycryptodomex==3.11.0\",\n \"pydantic==2.11.5\",\n \"pydantic-core==2.33.2\",\n \"pyflakes==2.4.0\",\n \"pygments==2.11.2\",\n \"pyhamcrest==2.0.2\",\n \"pyinotify==0.9.6\",\n \"pyjwt==2.3.0\",\n \"pyopenssl==21.0.0\",\n \"pyparsing==2.4.7\",\n \"pyrsistent==0.18.1\",\n \"pyserial==3.5\",\n \"pysmi==0.3.2\",\n \"pysnmp==4.4.12\",\n \"pystache==0.6.0\",\n \"python-dateutil>=2.8.1\",\n \"python-magic==0.4.24\",\n \"pythran>=0.10.0\",\n \"pytz==2022.1\",\n \"pyyaml==6.0.2\",\n \"regex==2024.11.6\",\n \"requests==2.32.4\",\n \"rich==11.2.0\",\n \"rouge-score==0.1.2\",\n \"sacrebleu==2.5.1\",\n \"safetensors==0.5.3\",\n \"scikit-learn==1.7.0\",\n \"scipy>=1.8.0\",\n \"secretstorage==3.3.1\",\n \"sentence-transformers==4.1.0\",\n \"sentry-sdk==2.29.1\",\n \"service-identity==18.1.0\",\n \"setproctitle==1.3.6\",\n \"shellingham==1.5.4\",\n \"six==1.16.0\",\n \"smart-open==7.1.0\",\n \"smmap==5.0.2\",\n \"soupsieve==2.3.1\",\n \"spacy==3.8.7\",\n \"spacy-legacy==3.0.12\",\n \"spacy-loggers==1.0.5\",\n \"srsly==2.5.1\",\n \"ssh-import-id==5.11\",\n \"sympy>=1.12\",\n \"tabulate==0.9.0\",\n \"tensorboard==2.19.0\",\n \"tensorboard-data-server==0.7.2\",\n \"termcolor==1.1.0\",\n \"thinc==8.3.6\",\n \"threadpoolctl==3.1.0\",\n \"tmuxp==1.9.2\",\n \"tokenizers==0.21.1\",\n \"torch==2.6.0\",\n \"torchvision==0.21.0\",\n \"tornado==6.1\",\n \"tqdm==4.67.1\",\n \"traitlets==5.1.1\",\n \"transformers==4.52.4\",\n \"triton==3.2.0\",\n \"twisted==22.1.0\",\n \"typer==0.16.0\",\n \"typing-extensions==4.14.0\",\n \"typing-inspection==0.4.1\",\n \"ufolib2==0.13.1\",\n \"urllib3==2.4.0\",\n \"userpath==1.8.0\",\n \"virtualenv==20.13.0\",\n \"wadllib==1.3.6\",\n \"wandb==0.20.1\",\n \"wasabi==1.1.3\",\n \"wcwidth==0.2.5\",\n \"weasel==0.4.1\",\n \"webencodings==0.5.1\",\n \"websocket-client==1.2.3\",\n \"werkzeug==2.0.2\",\n \"xxhash==3.5.0\",\n \"yarl==1.20.1\",\n \"zipp==1.0.0\",\n \"zope-interface==5.4.0\",\n]\n\n[tool.uv.sources]\nen-core-web-sm = { url = 
\"https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.8.0/en_core_web_sm-3.8.0-py3-none-any.whl\" }\n```" ]
2024-11-14T05:25:13Z
2025-06-27T00:56:47Z
null
NONE
null
null
### Describe the bug Hi, I encountered a small issue when saving datasets that can make saving take up to multiple hours. Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than when using `num_proc=1`. The documentation mentions that "Multiprocessing is disabled by default.", but there is no explanation on how to enable it. ### Steps to reproduce the bug ``` import numpy as np from datasets import Dataset n_samples = int(4e6) n_tokens_sample = 100 data_dict = { 'tokens' : np.random.randint(0, 100, (n_samples, n_tokens_sample)), } dataset = Dataset.from_dict(data_dict) dataset.save_to_disk('test_dataset', num_proc=1) dataset.save_to_disk('test_dataset', num_proc=4) dataset.save_to_disk('test_dataset', num_proc=8) ``` This results in: ``` >>> dataset.save_to_disk('test_dataset', num_proc=1) Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [00:17<00:00, 228075.15 examples/s] >>> dataset.save_to_disk('test_dataset', num_proc=4) Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [01:49<00:00, 36583.75 examples/s] >>> dataset.save_to_disk('test_dataset', num_proc=8) Saving the dataset (8/8 shards): 100%|██████████████| 4000000/4000000 [02:11<00:00, 30518.43 examples/s] ``` With larger datasets it can take hours, but I didn't benchmark that for this bug report. ### Expected behavior I would expect using `num_proc>1` to be faster instead of slower than `num_proc=1`. ### Environment info - `datasets` version: 3.1.0 - Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.26.2 - PyArrow version: 18.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7290/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7289
7,289
Dataset viewer displays wrong statistics
{ "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/speedcell4", "id": 3585459, "login": "speedcell4", "node_id": "MDQ6VXNlcjM1ODU0NTk=", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "repos_url": "https://api.github.com/users/speedcell4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "type": "User", "url": "https://api.github.com/users/speedcell4", "user_view_type": "public" }
[]
closed
false
[ "i think this issue is more for https://github.com/huggingface/dataset-viewer" ]
2024-11-11T03:29:27Z
2024-11-13T13:02:25Z
2024-11-13T13:02:25Z
NONE
null
null
### Describe the bug In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2`, and there are 94 different classes in total, but the viewer says there are 83 values only. This issue only arises in the `train` split. The total number of values is also 94 in the `test` and `dev` splits, where the viewer reports the correct number. <img width="177" alt="image" src="https://github.com/user-attachments/assets/78d76ef2-fe0e-4fa3-85e0-fb2552813d1c"> ### Steps to reproduce the bug ```python3 from datasets import load_dataset ds = load_dataset('speedcell4/opus-unigram2').unique('lang2') for key, lang2 in ds.items(): print(key, len(lang2)) ``` This script returns the following and shows that the `train` split has 94 values in the `lang2` column. ``` train 94 dev 94 test 94 zero 5 ``` ### Expected behavior 94 in the viewer. ### Environment info Collecting environment information... PyTorch version: 2.4.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: CentOS Linux release 8.2.2004 (Core) (x86_64) GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5) Clang version: Could not collect CMake version: version 3.11.4 Libc version: glibc-2.28 Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28 Is CUDA available: True CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 525.85.05 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 64 On-line CPU(s)
list: 0-63 Thread(s) per core: 1 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 4 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7542 32-Core Processor Stepping: 0 CPU MHz: 3389.114 BogoMIPS: 5789.40 Virtualization: AMD-V L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-31 NUMA node2 CPU(s): 32-47 NUMA node3 CPU(s): 48-63 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] torch==2.4.1+cu121 [pip3] torchaudio==2.4.1+cu121 [pip3] torchdevice==0.1.1 [pip3] torchglyph==0.3.2 [pip3] torchmetrics==1.5.0 [pip3] torchrua==0.5.1 [pip3] torchvision==0.19.1+cu121 [pip3] triton==3.0.0 [pip3] datasets==3.0.1 [conda] numpy 1.26.4 pypi_0 pypi [conda] torch 2.4.1+cu121 pypi_0 pypi [conda] torchaudio 2.4.1+cu121 pypi_0 pypi [conda] torchdevice 0.1.1 pypi_0 pypi [conda] torchglyph 0.3.2 pypi_0 pypi [conda] torchmetrics 1.5.0 pypi_0 pypi [conda] torchrua 0.5.1 pypi_0 pypi [conda] torchvision 0.19.1+cu121 pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi
{ "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/speedcell4", "id": 3585459, "login": "speedcell4", "node_id": "MDQ6VXNlcjM1ODU0NTk=", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "repos_url": "https://api.github.com/users/speedcell4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "type": "User", "url": "https://api.github.com/users/speedcell4", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7289/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7288
7,288
Release v3.1.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
[]
2024-11-10T09:38:15Z
2024-11-10T09:38:48Z
2024-11-10T09:38:48Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7288.diff", "html_url": "https://github.com/huggingface/datasets/pull/7288", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7288.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7288" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7288/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7287
7,287
Support for identifier-based automated split construction
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[ "Hi ! You can already configure the README.md to have multiple sets of splits, e.g.\r\n\r\n```yaml\r\nconfigs:\r\n- config_name: my_first_set_of_split\r\n data_files:\r\n - split: train\r\n path: *.csv\r\n- config_name: my_second_set_of_split\r\n data_files:\r\n - split: train\r\n path: train-*.csv\r\n - split: test\r\n path: test-*.csv\r\n```", "Hi - I had something slightly different in mind:\r\n\r\nCurrently the yaml splits specified like this only allow specifying which filenames to pass to each split.\r\nBut what if I have a situation where I know which individual *training examples* I want to put in each split.\r\n\r\nI could build split-specific files, however for large datasets with overlapping (e.g. multiple sets of) splits this could result in significant duplication of data.\r\n\r\nI can see that this could actually be very much intended (i.e. to discourage overlapping splits), but wondered whether some support for handling splits based on individual identifiers is something that could be considered. ", "This is not supported right now :/ Though you can load the data in two steps like this\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nfull_dataset = load_dataset(\"username/dataset\", split=\"train\")\r\nmy_first_set_indices = load_dataset(\"username/dataset\", \"my_first_set_of_split\", split=\"train\")\r\n\r\nmy_first_set = full_dataset.select(my_first_set_indices[\"indices\"])\r\n```\r\n\r\nyou can create such a dataset by adapting this code for example\r\n```python\r\n# upload the full dataset\r\nfull_dataset.push_to_hub(\"username/dataset\")\r\n# then upload the indices for each set\r\nDatasetDict({\r\n \"train\": Dataset.from_dict({\"indices\": [0, 1, 2, 3]}),\r\n \"test\": Dataset.from_dict({\"indices\": [4, 5]}),\r\n}).push_to_hub(\"username/dataset\", \"my_first_set_of_split\")" ]
2024-11-10T07:45:19Z
2024-11-19T14:37:02Z
null
CONTRIBUTOR
null
null
### Feature request As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure)) It would seem to be pretty useful to also allow splits to be based on identifiers of individual examples This could be configured like {"split_name": {"column_name": [column values in split]}} (This in turn requires unique 'index' columns, which could be explicitly supported or just assumed to be defined appropriately by the user). I guess a potential downside would be that shards would end up spanning different splits - is this something that can be handled somehow? Would this only affect streaming from hub? ### Motivation The main motivation would be that all data files could be stored in a single directory, and multiple sets of splits could be generated from the same data. This is often useful for large datasets with multiple distinct sets of splits. This could all be configured via the README.md yaml configs ### Your contribution May be able to contribute if it seems like a good idea
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7287/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7286
7,286
Concurrent loading in `load_from_disk` - `num_proc` as a param
{ "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "events_url": "https://api.github.com/users/unography/events{/privacy}", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/unography", "id": 5240449, "login": "unography", "node_id": "MDQ6VXNlcjUyNDA0NDk=", "organizations_url": "https://api.github.com/users/unography/orgs", "received_events_url": "https://api.github.com/users/unography/received_events", "repos_url": "https://api.github.com/users/unography/repos", "site_admin": false, "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "type": "User", "url": "https://api.github.com/users/unography", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
[]
2024-11-08T23:21:40Z
2024-11-09T16:14:37Z
2024-11-09T16:14:37Z
NONE
null
null
### Feature request https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param while loading dataset from disk, but can't find that in the documentation and code anywhere ### Motivation Make loading large datasets from disk faster ### Your contribution Happy to contribute if given pointers
{ "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "events_url": "https://api.github.com/users/unography/events{/privacy}", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/unography", "id": 5240449, "login": "unography", "node_id": "MDQ6VXNlcjUyNDA0NDk=", "organizations_url": "https://api.github.com/users/unography/orgs", "received_events_url": "https://api.github.com/users/unography/received_events", "repos_url": "https://api.github.com/users/unography/repos", "site_admin": false, "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "type": "User", "url": "https://api.github.com/users/unography", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7286/reactions" }
not_planned
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7285
7,285
Release v3.1.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
[]
2024-11-08T16:17:58Z
2024-11-08T16:18:05Z
2024-11-08T16:18:05Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7285.diff", "html_url": "https://github.com/huggingface/datasets/pull/7285", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7285.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7285" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7285/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7284
7,284
support for custom feature encoding/decoding
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
[ "@lhoestq ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7284). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-11-08T15:04:08Z
2024-11-21T16:09:47Z
2024-11-21T16:09:47Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7284.diff", "html_url": "https://github.com/huggingface/datasets/pull/7284", "merged_at": "2024-11-21T16:09:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/7284.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7284" }
Fix for https://github.com/huggingface/datasets/issues/7220 as suggested in discussion, in preference to #7221 (only concern would be on effect on type checking with custom feature types that aren't covered by FeatureType?)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7284/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7283
7,283
Allow for variation in metadata file names as per issue #7123
{ "avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4", "events_url": "https://api.github.com/users/egrace479/events{/privacy}", "followers_url": "https://api.github.com/users/egrace479/followers", "following_url": "https://api.github.com/users/egrace479/following{/other_user}", "gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/egrace479", "id": 38985481, "login": "egrace479", "node_id": "MDQ6VXNlcjM4OTg1NDgx", "organizations_url": "https://api.github.com/users/egrace479/orgs", "received_events_url": "https://api.github.com/users/egrace479/received_events", "repos_url": "https://api.github.com/users/egrace479/repos", "site_admin": false, "starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/egrace479/subscriptions", "type": "User", "url": "https://api.github.com/users/egrace479", "user_view_type": "public" }
[]
open
false
[]
2024-11-08T00:44:47Z
2024-11-08T00:44:47Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7283.diff", "html_url": "https://github.com/huggingface/datasets/pull/7283", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7283.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7283" }
Allow metadata files to have an identifying preface. Specifically, it will recognize files with `-metadata.csv` or `_metadata.csv` as metadata files for the purposes of the dataset viewer functionality. Resolves #7123.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7283/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7282
7,282
Faulty datasets.exceptions.ExpectedMoreSplitsError
{ "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meg-huggingface", "id": 90473723, "login": "meg-huggingface", "node_id": "MDQ6VXNlcjkwNDczNzIz", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "type": "User", "url": "https://api.github.com/users/meg-huggingface", "user_view_type": "public" }
[]
open
false
[]
2024-11-07T20:15:01Z
2024-11-07T20:15:42Z
null
CONTRIBUTOR
null
null
### Describe the bug Trying to download only the 'validation' split of my dataset; instead hit the error `datasets.exceptions.ExpectedMoreSplitsError`. Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`. Here is the Traceback: ``` Traceback (most recent call last): File "/home/user/app/app.py", line 12, in <module> ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True) File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2154, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1018, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/usr/local/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 68, in verify_splits raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits))) datasets.exceptions.ExpectedMoreSplitsError: {'train', 'test'} ``` Note: I am using the `data_files` argument only because I am trying to specify that I only want the 'validation' split, and the whole dataset will be downloaded even when the `split='validation'` argument is specified, unless you also specify `data_files`, as described here: https://discuss.huggingface.co/t/how-can-i-download-a-specific-split-of-a-dataset/79027 ### Steps to reproduce the bug 1. Create a Space with the default blank 'gradio' SDK https://huggingface.co/new-space 2. 
Create a file 'app.py' that loads a dataset to only extract a 'validation' split: `ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)` ### Expected behavior Downloading validation split. ### Environment info Default environment for creating a new Space. Relevant to this bug, that is: ``` FROM docker.io/library/python:3.10@sha256:fd0fa50d997eb56ce560c6e5ca6a1f5cf8fdff87572a16ac07fb1f5ca01eb608 --> RUN pip install --no-cache-dir pip==22.3.1 && pip install --no-cache-dir datasets "huggingface-hub>=0.19" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1" ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7282/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7281
7,281
File not found error
{ "avatar_url": "https://avatars.githubusercontent.com/u/37507786?v=4", "events_url": "https://api.github.com/users/MichielBontenbal/events{/privacy}", "followers_url": "https://api.github.com/users/MichielBontenbal/followers", "following_url": "https://api.github.com/users/MichielBontenbal/following{/other_user}", "gists_url": "https://api.github.com/users/MichielBontenbal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MichielBontenbal", "id": 37507786, "login": "MichielBontenbal", "node_id": "MDQ6VXNlcjM3NTA3Nzg2", "organizations_url": "https://api.github.com/users/MichielBontenbal/orgs", "received_events_url": "https://api.github.com/users/MichielBontenbal/received_events", "repos_url": "https://api.github.com/users/MichielBontenbal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MichielBontenbal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichielBontenbal/subscriptions", "type": "User", "url": "https://api.github.com/users/MichielBontenbal", "user_view_type": "public" }
[]
open
false
[ "Link to the dataset: https://huggingface.co/datasets/MichielBontenbal/UrbanSounds " ]
2024-11-07T09:04:49Z
2024-11-07T09:22:43Z
null
NONE
null
null
### Describe the bug I get a FileNotFoundError: <img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87"> ### Steps to reproduce the bug See screenshot. ### Expected behavior I want to load one audiofile from the dataset. ### Environment info MacOs Intel 14.6.1 (23G93) Python 3.10.9 Numpy 1.23 Datasets latest version
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7281/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7280
7,280
Add filename in error message when ReadError or similar occur
{ "avatar_url": "https://avatars.githubusercontent.com/u/37046039?v=4", "events_url": "https://api.github.com/users/elisa-aleman/events{/privacy}", "followers_url": "https://api.github.com/users/elisa-aleman/followers", "following_url": "https://api.github.com/users/elisa-aleman/following{/other_user}", "gists_url": "https://api.github.com/users/elisa-aleman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/elisa-aleman", "id": 37046039, "login": "elisa-aleman", "node_id": "MDQ6VXNlcjM3MDQ2MDM5", "organizations_url": "https://api.github.com/users/elisa-aleman/orgs", "received_events_url": "https://api.github.com/users/elisa-aleman/received_events", "repos_url": "https://api.github.com/users/elisa-aleman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/elisa-aleman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elisa-aleman/subscriptions", "type": "User", "url": "https://api.github.com/users/elisa-aleman", "user_view_type": "public" }
[]
open
false
[ "Hi Elisa, please share the error traceback here, and if you manage to find the location in the `datasets` code where the error occurs, feel free to open a PR to add the necessary logging / improve the error message.", "> please share the error traceback\n\nI don't have access to it but it should be during [this exception](https://github.com/huggingface/datasets/blob/2049c00921c59cdeb835137a1c49639cf175af07/src/datasets/builder.py#L1643) which happens during the loading of a dataset. If one of the downloaded files is corrupted, the for loop will not yield correctly, and the error will come from, say, in the case of tar files, [this iterable](https://github.com/huggingface/datasets/blob/2049c00921c59cdeb835137a1c49639cf175af07/src/datasets/utils/file_utils.py#L1293) which has no explicit error handling that leaves clues as to which file has failed.\n\nI only know the case for tar files, but I consider this issue could be happening across different file types too.", "I think having a better error handling for this tar iterable would be useful already, maybe a simple try/except in `_iter_from_urlpath` that checks for `tarfile.ReadError` and raises an error with the `urlpath` mentioned in the error ?", "I think not just from higher calls like the `_iter_from_urlpath` but directly wherever a file is attempted to be opened would be the best case, as the traceback would simply lead to that.", "so maybe there should be better error messages in each dataset builder definition ? e.g. in https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py for webdataset TAR archives" ]
2024-11-07T06:00:53Z
2024-11-20T13:23:12Z
null
NONE
null
null
Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files. Whenever downloading a full dataset, some files might be corrupted (either at the source or corrupted during download). However the errors often only let me know it was a tar file if `tarfile.ReadError` appears on the traceback, and I imagine similarly for other file types. This makes it really hard to debug which file is corrupted, and when dealing with very large datasets, it shouldn't be necessary to force download everything again.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7280/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7279
7,279
Feature proposal: Stacking, potentially heterogeneous, datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/96243987?v=4", "events_url": "https://api.github.com/users/TimCares/events{/privacy}", "followers_url": "https://api.github.com/users/TimCares/followers", "following_url": "https://api.github.com/users/TimCares/following{/other_user}", "gists_url": "https://api.github.com/users/TimCares/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TimCares", "id": 96243987, "login": "TimCares", "node_id": "U_kgDOBbyREw", "organizations_url": "https://api.github.com/users/TimCares/orgs", "received_events_url": "https://api.github.com/users/TimCares/received_events", "repos_url": "https://api.github.com/users/TimCares/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TimCares/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TimCares/subscriptions", "type": "User", "url": "https://api.github.com/users/TimCares", "user_view_type": "public" }
[]
open
false
[]
2024-11-05T15:40:50Z
2024-11-05T15:40:50Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7279.diff", "html_url": "https://github.com/huggingface/datasets/pull/7279", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7279.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7279" }
### Introduction Hello there, I noticed that there are two ways to combine multiple datasets: Either through `datasets.concatenate_datasets` or `datasets.interleave_datasets`. However, to my knowledge (please correct me if I am wrong) both approaches require the datasets that are combined to have the same features. I think it would be a great idea to add support for combining multiple datasets that might not follow the same schema (i.e. have different features), for example an image and text dataset. That is why I propose a third function of the `datasets.combine` module called `stack_datasets`, which can be used to combine a list of datasets with (potentially) different features. This would look as follows: ```python >>> from datasets import stack_datasets >>> image_dataset = ... >>> next(iter(image_dataset)) {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=555x416 at 0x313E79CD0> } >>> text_dataset = ... >>> next(iter(text_dataset)) {'text': "This is a test."} >>> stacked = stack_datasets(datasets={'i_ds': image_dataset, 't_ds': text_dataset}, stopping_strategy='all_exhausted') >>> next(iter(stacked)) { 'i_ds': {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=555x416 at 0x313E79CD0> } 't_ds': {'text': "This is a test."} } ``` <br /> ### Motivation I motivate this by: **A**: The fact that Pytorch offers a similar functionality under `torch.utils.data.StackDataset` ([link](https://pytorch.org/docs/stable/data.html#torch.utils.data.StackDataset)). **B**: In settings where one would like to e.g. train a Vision-Language model using an image-text dataset, an image dataset, and a text dataset, this functionality would offer a clean and intuitive solution to create multimodal datasets. I am aware that the aforementioned is also feasible without my proposed function, but I believe this offers a nice approach that aligns with existing functionality and is directly provided within the `datasets` package. 
### API `stack_datasets` has two arguments: `datasets` and `stopping_strategy `. <br /> `datasets` is a dictionary of either type `Dict[str, Dataset]` or `Dict[str, IterableDatasets]`, a mixture is not allowed. It contains the names of the datasets (the keys) and the datasets themselves (the values) that should be stacked. Each item returned is a dictionary with one key-value pair for each dataset. The keys are the names of the datasets as provided in the argument `datasets`, and the values are the respective examples from the datasets. <br /> `stopping_strategy` is the same as for `interleave_datasets`. If it is `first_exhausted` we stop if the smallest dataset runs out of examples, if it is `all_exhausted` we stop if all datasets ran out of examples at least once. For `all_exhausted` that means that we may visit examples from datasets multiple times. ### Docs I saw that there are multiple documentations and guides on the HuggingFace website that introduce `concatenate_datasets` and `interleave_datasets`, for example [here](https://huggingface.co/docs/datasets/process#concatenate). If this request is merged I would be willing to add the new functionality at the appropriate points in the documentation (if desired). ### Tests I also added some tests to ensure correctness. Some tests I wrote in [tests/test_iterable_dataset.py](https://github.com/TimCares/datasets/blob/fadc1159debf2a65d44e40cbf7758f2bd2cc8b08/tests/test_iterable_dataset.py#L2169) run for both `Dataset` and `IterableDataset` even though tests for `Dataset` technically do not belong in this script, but I found that this was a nice way to cover more cases with mostly the same code. ### Additional information I tried to write the code in a way so that it is similar to that of `concatenate_datasets` and `interleave_datasets`. I’m open to feedback and willing to make adjustments based on your suggestions, so feel free to give me your take. :)
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7279/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7278
7,278
Let soundfile directly read local audio files
{ "avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4", "events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}", "followers_url": "https://api.github.com/users/fawazahmed0/followers", "following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}", "gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fawazahmed0", "id": 20347013, "login": "fawazahmed0", "node_id": "MDQ6VXNlcjIwMzQ3MDEz", "organizations_url": "https://api.github.com/users/fawazahmed0/orgs", "received_events_url": "https://api.github.com/users/fawazahmed0/received_events", "repos_url": "https://api.github.com/users/fawazahmed0/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions", "type": "User", "url": "https://api.github.com/users/fawazahmed0", "user_view_type": "public" }
[]
open
false
[]
2024-11-04T17:41:13Z
2024-11-18T14:01:25Z
null
NONE
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7278.diff", "html_url": "https://github.com/huggingface/datasets/pull/7278", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7278.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7278" }
- [x] Fixes #7276
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7278/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7277
7,277
Add link to video dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7277). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-11-04T10:45:12Z
2024-11-04T17:05:06Z
2024-11-04T17:05:06Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7277.diff", "html_url": "https://github.com/huggingface/datasets/pull/7277", "merged_at": "2024-11-04T17:05:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/7277.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7277" }
This PR updates https://huggingface.co/docs/datasets/loading to also link to the new video loading docs. cc @mfarre
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7277/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7276
7,276
Accessing audio dataset value throws Format not recognised error
{ "avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4", "events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}", "followers_url": "https://api.github.com/users/fawazahmed0/followers", "following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}", "gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fawazahmed0", "id": 20347013, "login": "fawazahmed0", "node_id": "MDQ6VXNlcjIwMzQ3MDEz", "organizations_url": "https://api.github.com/users/fawazahmed0/orgs", "received_events_url": "https://api.github.com/users/fawazahmed0/received_events", "repos_url": "https://api.github.com/users/fawazahmed0/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions", "type": "User", "url": "https://api.github.com/users/fawazahmed0", "user_view_type": "public" }
[]
open
false
[ "Hi ! can you try if this works ?\r\n\r\n```python\r\nimport soundfile as sf\r\n\r\nwith open('C:\\\\Users\\\\Nawaz-Server\\\\.cache\\\\huggingface\\\\hub\\\\datasets--fawazahmed0--bug-audio\\\\snapshots\\\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\\\data\\\\Ghamadi\\\\037136.mp3', 'rb') as f:\r\n print(sf.read(f))\r\n```", "@lhoestq Same error, here is the output:\r\n\r\n```bash\r\n(mypy) C:\\Users\\Nawaz-Server\\Documents\\ml>python myest.py\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Nawaz-Server\\Documents\\ml\\myest.py\", line 5, in <module>\r\n print(sf.read(f))\r\n ^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 285, in read\r\n with SoundFile(file, 'r', samplerate, channels,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 658, in __init__\r\n self._file = self._open(file, mode_int, closefd)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 1216, in _open\r\n raise LibsndfileError(err, prefix=\"Error opening {0!r}: \".format(self.name))\r\nsoundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\\\Users\\\\Nawaz-Server\\\\.cache\\\\huggingface\\\\hub\\\\datasets--fawazahmed0--bug-audio\\\\snapshots\\\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\\\data\\\\Ghamadi\\\\037136.mp3'>: Format not recognised.\r\n\r\n```", "upstream bug: https://github.com/bastibe/python-soundfile/issues/439" ]
2024-11-04T05:59:13Z
2024-11-09T18:51:52Z
null
NONE
null
null
### Describe the bug Accessing audio dataset value throws `Format not recognised error` ### Steps to reproduce the bug **code:** ```py from datasets import load_dataset dataset = load_dataset("fawazahmed0/bug-audio") for data in dataset["train"]: print(data) ``` **output:** ```bash (mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py [C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed! {'audio': {'path': 'C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037135.mp3', 'array': array([ 0.00000000e+00, -2.86519935e-22, -2.56504911e-21, ..., -1.94239747e-02, -2.42924765e-02, -2.99104657e-02]), 'sampling_rate': 22050}, 'reciter': 'Ghamadi', 'transcription': 'الا عجوز ا في الغبرين', 'line': 3923, 'chapter': 37, 'verse': 135, 'text': 'إِلَّا عَجُوزࣰ ا فِي ٱلۡغَٰبِرِينَ'} Traceback (most recent call last): File "C:\Users\Nawaz-Server\Documents\ml\myest.py", line 5, in <module> for data in dataset["train"]: ~~~~~~~^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\arrow_dataset.py", line 2372, in __iter__ formatted_output = format_table( ^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 639, in format_table return formatter(pa_table, query_type=query_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 403, in __call__ return self.format_row(pa_table) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 444, in format_row row = self.python_features_decoder.decode_row(row) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", 
line 222, in decode_row return self.features.decode_example(row) if self.features else row ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 2042, in decode_example column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 1403, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\audio.py", line 184, in decode_example array, sampling_rate = sf.read(f) ^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 285, in read with SoundFile(file, 'r', samplerate, channels, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 658, in __init__ self._file = self._open(file, mode_int, closefd) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 1216, in _open raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) soundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'>: Format not recognised. 
``` ### Expected behavior Everything should work fine, as loading the problematic audio file directly with soundfile package works fine **code:** ``` import soundfile as sf print(sf.read('C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3')) ``` **output:** ```bash (mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py [C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed! (array([ 0.00000000e+00, -8.43723821e-22, -2.45370628e-22, ..., -7.71464454e-03, -6.90496899e-03, -8.63333419e-03]), 22050) ``` ### Environment info - `datasets` version: 3.0.2 - Platform: Windows-11-10.0.22621-SP0 - Python version: 3.12.7 - `huggingface_hub` version: 0.26.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.10.0 - soundfile: 0.12.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7276/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7275
7,275
load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/46941974?v=4", "events_url": "https://api.github.com/users/santiagobp99/events{/privacy}", "followers_url": "https://api.github.com/users/santiagobp99/followers", "following_url": "https://api.github.com/users/santiagobp99/following{/other_user}", "gists_url": "https://api.github.com/users/santiagobp99/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/santiagobp99", "id": 46941974, "login": "santiagobp99", "node_id": "MDQ6VXNlcjQ2OTQxOTc0", "organizations_url": "https://api.github.com/users/santiagobp99/orgs", "received_events_url": "https://api.github.com/users/santiagobp99/received_events", "repos_url": "https://api.github.com/users/santiagobp99/repos", "site_admin": false, "starred_url": "https://api.github.com/users/santiagobp99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/santiagobp99/subscriptions", "type": "User", "url": "https://api.github.com/users/santiagobp99", "user_view_type": "public" }
[]
open
false
[]
2024-11-04T03:01:44Z
2024-11-04T03:01:44Z
null
NONE
null
null
### Describe the bug I am performing two operations I see on a Hugging Face tutorial (Fine-tune a language model), and I am defining every aspect inside the mapped functions, also some imports of the library because it doesn't identify anything not defined outside that function where the dataset elements are being mapped: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B `- lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=batch_size, num_proc=4, ) - tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) def tokenize_function(examples): model_checkpoint = 'gpt2' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) return tokenizer(examples["text"])` ### Steps to reproduce the bug Currently handle all the imports inside the function ### Expected behavior The code must work as expected in the notebook, but currently this is not happening. https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B ### Environment info print(transformers.__version__) 4.46.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7275/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7274
7,274
[MINOR:TYPO] Fix typo in exception text
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki", "user_view_type": "public" }
[]
closed
false
[]
2024-11-01T21:15:29Z
2025-05-21T13:17:20Z
2025-05-21T13:17:20Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7274.diff", "html_url": "https://github.com/huggingface/datasets/pull/7274", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7274.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7274" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7274/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7273
7,273
Raise error for incorrect JSON serialization
{ "avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4", "events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}", "followers_url": "https://api.github.com/users/varadhbhatnagar/followers", "following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}", "gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/varadhbhatnagar", "id": 20443618, "login": "varadhbhatnagar", "node_id": "MDQ6VXNlcjIwNDQzNjE4", "organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs", "received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events", "repos_url": "https://api.github.com/users/varadhbhatnagar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions", "type": "User", "url": "https://api.github.com/users/varadhbhatnagar", "user_view_type": "public" }
[]
closed
false
[ "PTAL @lhoestq @albertvillanova ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7273). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-11-01T11:54:35Z
2024-11-18T11:25:01Z
2024-11-18T11:25:01Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7273.diff", "html_url": "https://github.com/huggingface/datasets/pull/7273", "merged_at": "2024-11-18T11:25:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/7273.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7273" }
Raise error when `lines = False` and `batch_size < Dataset.num_rows` in `Dataset.to_json()`. Issue: #7037 Related PRs: #7039 #7181
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7273/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7272
7,272
fix conda release worlflow
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7272). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-31T15:56:19Z
2024-10-31T15:58:35Z
2024-10-31T15:57:29Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7272.diff", "html_url": "https://github.com/huggingface/datasets/pull/7272", "merged_at": "2024-10-31T15:57:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/7272.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7272" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7272/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7271
7,271
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7271). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-31T15:22:51Z
2024-10-31T15:25:27Z
2024-10-31T15:22:59Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7271.diff", "html_url": "https://github.com/huggingface/datasets/pull/7271", "merged_at": "2024-10-31T15:22:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/7271.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7271" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7271/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7270
7,270
Release: 3.1.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7270). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-31T15:10:01Z
2024-10-31T15:14:23Z
2024-10-31T15:14:20Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7270.diff", "html_url": "https://github.com/huggingface/datasets/pull/7270", "merged_at": "2024-10-31T15:14:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/7270.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7270" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7270/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7269
7,269
Memory leak when streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4", "events_url": "https://api.github.com/users/Jourdelune/events{/privacy}", "followers_url": "https://api.github.com/users/Jourdelune/followers", "following_url": "https://api.github.com/users/Jourdelune/following{/other_user}", "gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jourdelune", "id": 64205064, "login": "Jourdelune", "node_id": "MDQ6VXNlcjY0MjA1MDY0", "organizations_url": "https://api.github.com/users/Jourdelune/orgs", "received_events_url": "https://api.github.com/users/Jourdelune/received_events", "repos_url": "https://api.github.com/users/Jourdelune/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions", "type": "User", "url": "https://api.github.com/users/Jourdelune", "user_view_type": "public" }
[]
open
false
[ "I seem to have encountered the same problem when loading non streaming datasets. load_from_disk. Causing hundreds of GB of memory, but the dataset actually only has 50GB", "FYI when streaming parquet data, only one row group per worker is loaded in memory at a time.\r\n\r\nBtw for datasets of embeddings you can surely optimize your RAM by reading the data as torch tensors directly instead of the default python lists\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\ndataset = load_dataset(\"WaveGenAI/dataset\", streaming=True).with_format(\"torch\")\r\n\r\ndataloader = DataLoader(dataset[\"train\"], num_workers=3)\r\n```", "Im also, hitting this issue.....\n\n```python\n # This is what's causing the leak:\n batch_datasets = []\n for file_path in batch_files:\n dataset = load_dataset(..., streaming=True)\n shuffled_dataset = dataset.shuffle(seed=42, buffer_size=1000) # 1000-item buffer\n batch_datasets.append(shuffled_dataset) # Buffer persists\n\n interleaved_dataset = interleave_datasets(batch_datasets, seed=42) \n```\n\nAnd, nothing helps\n```python\n del batch_datasets, interleaved_dataset\n gc.collect() # This doesn't work for HuggingFace internal memory structures\n```\nso my guess is that they wrote this in RUST and forgot to clean up!!!\n\nNow, if i remove the interleaving and process files sequentially... like this it still leaks\n```python\n\n # Process files one by one - no batching, no interleaving\n for file_idx, file_path in enumerate(file_paths):\n dataset = load_dataset(\"parquet\", data_files=file_path, split=\"train\", streaming=True)\n shuffled_dataset = dataset.shuffle(seed=42, buffer_size=1000) \n \n for record in shuffled_dataset:\n # Process record immediately\n pass\n \n del dataset, shuffled_dataset\n gc.collect()\n```\n\n - File 1: 42.4% memory\n - File 2: 42.5% memory\n - File 3: 42.5% memory\n - File 4: 48.4% memory (+6%)\n - File 5: 52.7% memory (+4.3%)\n - File 6: 56.7% memory (+4%)\n - File 7: 59.6% memory (+2.9%)\n - File 8: 62.0% memory (+2.4%)\n\nI had to go back to sequential shuffling (NO Interleaving) and clean up like this\n```python\n dataset.cleanup_cache_files() \n del dataset, shuffled_dataset \n gc.collect() \n pa.default_memory_pool().release_unused() \n libc.malloc_trim(0) # when available \n```", "i have also observed these memory leaks inside the huggingface library when developing bghira/captionflow and had the same outcome of being unable to actually free anything when it occurs. i've worked around it by avoiding some of the more damaging parts of the library, but in doing so i've essentially restricted the compatibility levels of the project.", "Could it be a leak from PyArrow which is used to stream the data from the Parquet files ?", "i believe it's heavily involved yeah" ]
2024-10-31T13:33:52Z
2025-09-02T12:24:31Z
null
NONE
null
null
### Describe the bug I try to use a dataset with streaming=True, the issue I have is that the RAM usage becomes higher and higher until it is no longer sustainable. I understand that huggingface store data in ram during the streaming, and more worker in dataloader there are, more a lot of shard will be stored in ram, but the issue I have is that the ram usage is not constant. So after each new shard loaded, the ram usage will be higher and higher. ### Steps to reproduce the bug You can run this code and see you ram usage, after each shard of 255 examples, your ram usage will be extended. ```py from datasets import load_dataset from torch.utils.data import DataLoader dataset = load_dataset("WaveGenAI/dataset", streaming=True) dataloader = DataLoader(dataset["train"], num_workers=3) for i, data in enumerate(dataloader): print(i, end="\r") ``` ### Expected behavior The Ram usage should be always the same (just 3 shards loaded in the ram). ### Environment info - `datasets` version: 3.0.1 - Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40 - Python version: 3.12.4 - `huggingface_hub` version: 0.26.0 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7269/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7268
7,268
load_from_disk
{ "avatar_url": "https://avatars.githubusercontent.com/u/71670961?v=4", "events_url": "https://api.github.com/users/ghaith-mq/events{/privacy}", "followers_url": "https://api.github.com/users/ghaith-mq/followers", "following_url": "https://api.github.com/users/ghaith-mq/following{/other_user}", "gists_url": "https://api.github.com/users/ghaith-mq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghaith-mq", "id": 71670961, "login": "ghaith-mq", "node_id": "MDQ6VXNlcjcxNjcwOTYx", "organizations_url": "https://api.github.com/users/ghaith-mq/orgs", "received_events_url": "https://api.github.com/users/ghaith-mq/received_events", "repos_url": "https://api.github.com/users/ghaith-mq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghaith-mq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghaith-mq/subscriptions", "type": "User", "url": "https://api.github.com/users/ghaith-mq", "user_view_type": "public" }
[]
open
false
[ "Hello, It's an interesting issue here. I have the same problem, I have a local dataset and I want to push the dataset to the hub but huggingface does a copy of it.\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"webdataset\", data_files=\"/media/works/data/*.tar\") # copy here\r\ndataset.push_to_hub(\"WaveGenAI/audios2\")\r\n```\r\n\r\nEdit: I can use HfApi for my use case\r\n", "Is there any update on this issue? I found the same behavior too.\nMy datasets version is `2.13.2`", "Updating to the newest version of datasets lib resolved the issue. " ]
2024-10-31T11:51:56Z
2025-07-01T08:42:17Z
null
NONE
null
null
### Describe the bug I have data saved with save_to_disk. The data is big (700Gb). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that? ### Steps to reproduce the bug when trying to load data using load_From_disk after being saved using save_to_disk ### Expected behavior run out of disk space ### Environment info lateest version
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7268/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7267
7,267
Source installation fails on Macintosh with python 3.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4", "events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}", "followers_url": "https://api.github.com/users/mayankagarwals/followers", "following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}", "gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mayankagarwals", "id": 39498938, "login": "mayankagarwals", "node_id": "MDQ6VXNlcjM5NDk4OTM4", "organizations_url": "https://api.github.com/users/mayankagarwals/orgs", "received_events_url": "https://api.github.com/users/mayankagarwals/received_events", "repos_url": "https://api.github.com/users/mayankagarwals/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions", "type": "User", "url": "https://api.github.com/users/mayankagarwals", "user_view_type": "public" }
[]
open
false
[ "I encountered the same problem on M1, a workaround I did was to simply comment out the dependency:\r\n\r\n```python\r\n...\r\n \"zstandard\",\r\n \"polars[timezone]>=0.20.0\",\r\n # \"decord==0.6.0\",\r\n]\r\n```\r\n\r\nThis worked for me as the adjustments I did to the code do not use the dependency, but I do not know if the same holds for you.\r\n\r\nI also do not think it is a good idea to rely on a dependency (I mean decord) that has not been maintained for 2 years, but I saw that even eva-decord hasn't been maintained since last year.\r\n\r\nDid you get it to work with eva-decord?" ]
2024-10-31T10:18:45Z
2024-11-04T22:18:06Z
null
NONE
null
null
### Describe the bug Hi, Decord is a dev dependency not maintained since couple years. It does not have an ARM package available rendering it uninstallable on non-intel based macs Suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord) which doesnt have this problem. Happy to raise a PR ### Steps to reproduce the bug Source installation as mentioned in contributinog.md ### Expected behavior Installation without decord failing to be installed. ### Environment info python=3.10, M3 Mac
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7267/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7266
7,266
The dataset viewer should be available soon. Please retry later.
{ "avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4", "events_url": "https://api.github.com/users/viiika/events{/privacy}", "followers_url": "https://api.github.com/users/viiika/followers", "following_url": "https://api.github.com/users/viiika/following{/other_user}", "gists_url": "https://api.github.com/users/viiika/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/viiika", "id": 39821659, "login": "viiika", "node_id": "MDQ6VXNlcjM5ODIxNjU5", "organizations_url": "https://api.github.com/users/viiika/orgs", "received_events_url": "https://api.github.com/users/viiika/received_events", "repos_url": "https://api.github.com/users/viiika/repos", "site_admin": false, "starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viiika/subscriptions", "type": "User", "url": "https://api.github.com/users/viiika", "user_view_type": "public" }
[]
closed
false
[ "Waiting is all you need. 10 hours later, it works." ]
2024-10-30T16:32:00Z
2024-10-31T03:48:11Z
2024-10-31T03:48:10Z
NONE
null
null
### Describe the bug After waiting for 2 hours, it still presents ``The dataset viewer should be available soon. Please retry later.'' ### Steps to reproduce the bug dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT ### Expected behavior Present the dataset viewer. ### Environment info NA
{ "avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4", "events_url": "https://api.github.com/users/viiika/events{/privacy}", "followers_url": "https://api.github.com/users/viiika/followers", "following_url": "https://api.github.com/users/viiika/following{/other_user}", "gists_url": "https://api.github.com/users/viiika/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/viiika", "id": 39821659, "login": "viiika", "node_id": "MDQ6VXNlcjM5ODIxNjU5", "organizations_url": "https://api.github.com/users/viiika/orgs", "received_events_url": "https://api.github.com/users/viiika/received_events", "repos_url": "https://api.github.com/users/viiika/repos", "site_admin": false, "starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viiika/subscriptions", "type": "User", "url": "https://api.github.com/users/viiika", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7266/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7265
7,265
Disallow video push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7265). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-30T13:21:55Z
2024-10-30T13:36:05Z
2024-10-30T13:36:02Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7265.diff", "html_url": "https://github.com/huggingface/datasets/pull/7265", "merged_at": "2024-10-30T13:36:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/7265.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7265" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7265/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7264
7,264
fix docs relative links
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7264). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-30T13:07:34Z
2024-10-30T13:10:13Z
2024-10-30T13:09:02Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7264.diff", "html_url": "https://github.com/huggingface/datasets/pull/7264", "merged_at": "2024-10-30T13:09:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/7264.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7264" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7264/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7263
7,263
Small addition to video docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7263). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-29T16:58:37Z
2024-10-29T17:01:05Z
2024-10-29T16:59:10Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7263.diff", "html_url": "https://github.com/huggingface/datasets/pull/7263", "merged_at": "2024-10-29T16:59:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/7263.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7263" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7263/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7262
7,262
Allow video with disabled decoding without decord
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7262). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-29T10:54:04Z
2024-10-29T10:56:19Z
2024-10-29T10:55:37Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7262.diff", "html_url": "https://github.com/huggingface/datasets/pull/7262", "merged_at": "2024-10-29T10:55:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/7262.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7262" }
for the viewer, this way it can use Video(decode=False) and doesn't need decord (which causes segfaults)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7262/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7261
7,261
Cannot load the cache when mapping the dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/43033959?v=4", "events_url": "https://api.github.com/users/zhangn77/events{/privacy}", "followers_url": "https://api.github.com/users/zhangn77/followers", "following_url": "https://api.github.com/users/zhangn77/following{/other_user}", "gists_url": "https://api.github.com/users/zhangn77/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zhangn77", "id": 43033959, "login": "zhangn77", "node_id": "MDQ6VXNlcjQzMDMzOTU5", "organizations_url": "https://api.github.com/users/zhangn77/orgs", "received_events_url": "https://api.github.com/users/zhangn77/received_events", "repos_url": "https://api.github.com/users/zhangn77/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zhangn77/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangn77/subscriptions", "type": "User", "url": "https://api.github.com/users/zhangn77", "user_view_type": "public" }
[]
open
false
[ "@zhangn77 Hi ,have you solved this problem? I encountered the same issue during training. Could we discuss it?", "I also encountered the same problem, why is that?" ]
2024-10-29T08:29:40Z
2025-03-24T13:27:55Z
null
NONE
null
null
### Describe the bug I'm training the flux controlnet. The train_dataset.map() takes a long time to finish. However, when I killed one training process and wanted to restart a new training with the same dataset, I couldn't reuse the mapped result even though I defined the cache dir for the dataset. with accelerator.main_process_first(): from datasets.fingerprint import Hasher # fingerprint used by the cache for the other processes to load the result # details: https://github.com/huggingface/diffusers/pull/4038#discussion_r1266078401 new_fingerprint = Hasher.hash(args) train_dataset = train_dataset.map( compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint, batch_size=10, ) ### Steps to reproduce the bug train flux controlnet and start again ### Expected behavior will not map again ### Environment info latest diffusers
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7261/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7260
7,260
cache can't be cleaned or disabled
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007828?v=4", "events_url": "https://api.github.com/users/charliedream1/events{/privacy}", "followers_url": "https://api.github.com/users/charliedream1/followers", "following_url": "https://api.github.com/users/charliedream1/following{/other_user}", "gists_url": "https://api.github.com/users/charliedream1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/charliedream1", "id": 15007828, "login": "charliedream1", "node_id": "MDQ6VXNlcjE1MDA3ODI4", "organizations_url": "https://api.github.com/users/charliedream1/orgs", "received_events_url": "https://api.github.com/users/charliedream1/received_events", "repos_url": "https://api.github.com/users/charliedream1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/charliedream1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/charliedream1/subscriptions", "type": "User", "url": "https://api.github.com/users/charliedream1", "user_view_type": "public" }
[]
open
false
[ "Hey I have a similar problem and found a workaround using [temporary directories](https://docs.python.org/3/library/tempfile.html):\r\n\r\n```python\r\nfrom tempfile import TemporaryDirectory\r\n\r\nwith TemporaryDirectory() as cache_dir:\r\n data = load_dataset('json', data_files=save_local_path, split='train', cache_dir=cache_dir)\r\n```\r\n\r\nBut I do agree that it would be more intuitive if `datasets` supported this directly. Especially `disable_caching` is confusing, since it basically doesn't disable caching." ]
2024-10-29T03:15:28Z
2024-12-11T09:04:52Z
null
NONE
null
null
### Describe the bug I tried the following ways, but the cache can't be disabled. I have 2T of data, but I also got more than 2T of cache files. This puts pressure on storage. I need to disable the cache or clean it immediately after processing. The following ways are all not working, please give some help! ```python from datasets import disable_caching from transformers import AutoTokenizer disable_caching() tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path) def tokenization_fn(examples): column_name = 'text' if 'text' in examples else 'data' tokenized_inputs = tokenizer( examples[column_name], return_special_tokens_mask=True, truncation=False, max_length=tokenizer.model_max_length ) return tokenized_inputs data = load_dataset('json', data_files=save_local_path, split='train', cache_dir=None) data.cleanup_cache_files() updated_dataset = data.map(tokenization_fn, load_from_cache_file=False) updated_dataset.cleanup_cache_files() ``` ### Expected behavior no cache file generated ### Environment info Ubuntu 20.04.6 LTS datasets 3.0.2
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7260/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7259
7,259
Don't embed videos
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7259). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T16:25:10Z
2024-10-28T16:27:34Z
2024-10-28T16:26:01Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7259.diff", "html_url": "https://github.com/huggingface/datasets/pull/7259", "merged_at": "2024-10-28T16:26:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/7259.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7259" }
don't include video bytes when running download_and_prepare(format="parquet") this also affects push_to_hub which will just upload the local paths of the videos though
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7259/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7258
7,258
Always set non-null writer batch size
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7258). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T15:26:14Z
2024-10-28T15:28:41Z
2024-10-28T15:26:29Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7258.diff", "html_url": "https://github.com/huggingface/datasets/pull/7258", "merged_at": "2024-10-28T15:26:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/7258.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7258" }
bug introduced in #7230, it was preventing the Viewer limit writes to work
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7258/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7257
7,257
fix ci for pyarrow 18
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7257). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T14:31:34Z
2024-10-28T14:34:05Z
2024-10-28T14:31:44Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7257.diff", "html_url": "https://github.com/huggingface/datasets/pull/7257", "merged_at": "2024-10-28T14:31:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/7257.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7257" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7257/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7256
7,256
Retry all requests timeouts
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7256). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T14:23:16Z
2024-10-28T14:56:28Z
2024-10-28T14:56:26Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7256.diff", "html_url": "https://github.com/huggingface/datasets/pull/7256", "merged_at": "2024-10-28T14:56:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/7256.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7256" }
as reported in https://github.com/huggingface/datasets/issues/6843
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7256/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7255
7,255
fix decord import
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7255). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T14:08:19Z
2024-10-28T14:10:43Z
2024-10-28T14:09:14Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7255.diff", "html_url": "https://github.com/huggingface/datasets/pull/7255", "merged_at": "2024-10-28T14:09:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/7255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7255" }
delay the import until Video() is instantiated + also import duckdb first (otherwise importing duckdb later causes a segfault)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7255/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7254
7,254
mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy")
{ "avatar_url": "https://avatars.githubusercontent.com/u/97193607?v=4", "events_url": "https://api.github.com/users/Akhil-CM/events{/privacy}", "followers_url": "https://api.github.com/users/Akhil-CM/followers", "following_url": "https://api.github.com/users/Akhil-CM/following{/other_user}", "gists_url": "https://api.github.com/users/Akhil-CM/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Akhil-CM", "id": 97193607, "login": "Akhil-CM", "node_id": "U_kgDOBcsOhw", "organizations_url": "https://api.github.com/users/Akhil-CM/orgs", "received_events_url": "https://api.github.com/users/Akhil-CM/received_events", "repos_url": "https://api.github.com/users/Akhil-CM/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Akhil-CM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Akhil-CM/subscriptions", "type": "User", "url": "https://api.github.com/users/Akhil-CM", "user_view_type": "public" }
[]
open
false
[ "It seems that https://github.com/huggingface/datasets/issues/5517 is exactly the same issue.\r\n\r\nIt was mentioned there that this would be fixed in version 3.x" ]
2024-10-26T22:06:27Z
2024-10-26T22:07:37Z
null
NONE
null
null
### Describe the bug If the user provides a `Features` type value to `datasets.Dataset` with members having `Array2D` with a value for `dtype`, it is not respected during `with_format("numpy")` which should return a `np.array` with `dtype` that the user provided for `Array2D`. It seems for floats, it will be set to `float32` and for ints it will be set to `int64` ### Steps to reproduce the bug ```python import numpy as np import datasets from datasets import Dataset, Features, Array2D print(f"datasets version: {datasets.__version__}") data_info = { "arr_float" : "float64", "arr_int" : "int32" } sample = {key : [np.zeros([4, 5], dtype=dtype)] for key, dtype in data_info.items()} features = {key : Array2D(shape=(None, 5), dtype=dtype) for key, dtype in data_info.items()} features = Features(features) dataset = Dataset.from_dict(sample, features=features) ds = dataset.with_format("numpy") for key in features: print(f"{key} feature dtype: ", ds.features[key].dtype) print(f"{key} dtype:", ds[key].dtype) ``` Output: ```bash datasets version: 3.0.2 arr_float feature dtype: float64 arr_float dtype: float32 arr_int feature dtype: int32 arr_int dtype: int64 ``` ### Expected behavior It should return a `np.array` with `dtype` that the user provided for the corresponding member in the `Features` type value ### Environment info - `datasets` version: 3.0.2 - Platform: Linux-6.11.5-arch1-1-x86_64-with-glibc2.40 - Python version: 3.12.7 - `huggingface_hub` version: 0.26.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7254/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7253
7,253
Unable to upload a large dataset zip either from command line or UI
{ "avatar_url": "https://avatars.githubusercontent.com/u/159609047?v=4", "events_url": "https://api.github.com/users/vakyansh/events{/privacy}", "followers_url": "https://api.github.com/users/vakyansh/followers", "following_url": "https://api.github.com/users/vakyansh/following{/other_user}", "gists_url": "https://api.github.com/users/vakyansh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vakyansh", "id": 159609047, "login": "vakyansh", "node_id": "U_kgDOCYNw1w", "organizations_url": "https://api.github.com/users/vakyansh/orgs", "received_events_url": "https://api.github.com/users/vakyansh/received_events", "repos_url": "https://api.github.com/users/vakyansh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vakyansh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vakyansh/subscriptions", "type": "User", "url": "https://api.github.com/users/vakyansh", "user_view_type": "public" }
[]
open
false
[]
2024-10-26T13:17:06Z
2024-10-26T13:17:06Z
null
NONE
null
null
### Describe the bug Unable to upload a large dataset zip from command line or UI. The UI simply says error. I am trying to upload a tar.gz file of 17GB. <img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34"> <img width="755" alt="image" src="https://github.com/user-attachments/assets/a8d4acda-7f02-4279-9c2d-b2e0282b4faa"> ### Steps to reproduce the bug Upload a large file ### Expected behavior The file should upload without any issue. ### Environment info None
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7253/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7252
7,252
Add IterableDataset.shard()
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7252). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Is there some way to get this to work for pytorch dataloader workers?\r\n\r\neg. start with a single sharded IterableDataset.from_generator(), then reshard before calling map() to do expensive processing over multiple workers" ]
2024-10-25T11:07:12Z
2025-03-21T03:58:43Z
2024-10-25T15:45:22Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7252.diff", "html_url": "https://github.com/huggingface/datasets/pull/7252", "merged_at": "2024-10-25T15:45:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/7252.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7252" }
Will be useful to distribute a dataset across workers (other than pytorch), e.g. spark. I also renamed `.n_shards` -> `.num_shards` for consistency and kept the old name for backward compatibility, and made a few changes in internal functions for consistency as well (rank, world_size -> num_shards, index). Breaking change: the new default for `contiguous` in `Dataset.shard()` is `True`, but imo not a big deal since I couldn't find any usage of `contiguous=False` internally (we always do contiguous=True for map-style datasets since it's more optimized) or in the wild
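For illustration, contiguous sharding can be sketched in pure Python. This mirrors the arithmetic commonly used for contiguous splits (each shard gets one contiguous block, and the first `n % num_shards` shards get one extra element); the function name is illustrative, not the library's internal API.

```python
def contiguous_shard_indices(n, num_shards, index):
    # Shard `index` out of `num_shards` gets one contiguous block of the
    # n indices; the first (n % num_shards) shards each get one extra element.
    div, mod = divmod(n, num_shards)
    start = div * index + min(index, mod)
    end = start + div + (1 if index < mod else 0)
    return list(range(start, end))

# 10 elements over 3 shards -> sizes 4, 3, 3
print(contiguous_shard_indices(10, 3, 0))  # [0, 1, 2, 3]
print(contiguous_shard_indices(10, 3, 1))  # [4, 5, 6]
```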
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7252/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7251
7,251
Missing video docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7251). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-24T16:45:12Z
2024-10-24T16:48:29Z
2024-10-24T16:48:27Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7251.diff", "html_url": "https://github.com/huggingface/datasets/pull/7251", "merged_at": "2024-10-24T16:48:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/7251.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7251" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7251/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7250
7,250
Basic XML support (mostly copy pasted from text)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7250). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-24T16:14:50Z
2024-10-24T16:19:18Z
2024-10-24T16:19:16Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7250.diff", "html_url": "https://github.com/huggingface/datasets/pull/7250", "merged_at": "2024-10-24T16:19:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/7250.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7250" }
enable the viewer for datasets like https://huggingface.co/datasets/FrancophonIA/e-calm (there will be more and more apparently)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7250/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7249
7,249
How to debug
{ "avatar_url": "https://avatars.githubusercontent.com/u/49576595?v=4", "events_url": "https://api.github.com/users/ShDdu/events{/privacy}", "followers_url": "https://api.github.com/users/ShDdu/followers", "following_url": "https://api.github.com/users/ShDdu/following{/other_user}", "gists_url": "https://api.github.com/users/ShDdu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ShDdu", "id": 49576595, "login": "ShDdu", "node_id": "MDQ6VXNlcjQ5NTc2NTk1", "organizations_url": "https://api.github.com/users/ShDdu/orgs", "received_events_url": "https://api.github.com/users/ShDdu/received_events", "repos_url": "https://api.github.com/users/ShDdu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ShDdu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShDdu/subscriptions", "type": "User", "url": "https://api.github.com/users/ShDdu", "user_view_type": "public" }
[]
open
false
[]
2024-10-24T01:03:51Z
2024-10-24T01:03:51Z
null
NONE
null
null
### Describe the bug

I wanted to use my own script to handle the processing, and followed the tutorial documentation by writing the MyDatasetConfig and MyDataset builder classes (the latter containing the `_info`, `_split_generators` and `_generate_examples` methods). Testing with simple data produced the expected output, but when I tried more complex processing I found that I was unable to debug (even the simple samples were inaccessible). No errors are reported, and the print statements in `_info`, `_split_generators` and `_generate_examples` all run, but the breakpoints are never hit.

### Steps to reproduce the bug

```python
# my_dataset.py
import json

import datasets


class MyDatasetConfig(datasets.BuilderConfig):
    def __init__(self, **kwargs):
        super(MyDatasetConfig, self).__init__(**kwargs)


class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        MyDatasetConfig(name="default", version=VERSION, description="myDATASET"),
    ]

    def _info(self):
        print("info")  # breakpoint
        return datasets.DatasetInfo(
            description="myDATASET",
            features=datasets.Features(
                {
                    "id": datasets.Value("int32"),
                    "text": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["negative", "positive"]),
                }
            ),
            supervised_keys=("text", "label"),
        )

    def _split_generators(self, dl_manager):
        print("generate")  # breakpoint
        data_file = "data.json"
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_file}
            ),
        ]

    def _generate_examples(self, filepath):
        print("example")  # breakpoint
        with open(filepath, encoding="utf-8") as f:
            data = json.load(f)
        for idx, sample in enumerate(data):
            yield idx, {
                "id": sample["id"],
                "text": sample["text"],
                "label": sample["label"],
            }
```

```python
# main.py
import os

os.environ["TRANSFORMERS_NO_MULTIPROCESSING"] = "1"

from datasets import load_dataset

dataset = load_dataset("my_dataset.py", split="train", cache_dir=None)
print(dataset[:5])
```

### Expected behavior

Pause at breakpoints while running under the debugger.

### Environment info

pycharm
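A plausible cause (an assumption, not confirmed in this thread): `load_dataset` copies the script into the modules cache under `~/.cache/huggingface/modules` and imports that copy, so IDE breakpoints set on the original file never bind; calling `breakpoint()` inside the script instead of using IDE breakpoints tends to work. The mechanism can be demonstrated standalone:

```python
import importlib.util
import pathlib
import shutil
import tempfile

# Simulate what happens when a loader copies a script elsewhere and imports
# the copy: the imported module's __file__ no longer points at the file the
# editor (and its breakpoints) is attached to.
src = pathlib.Path(tempfile.mkdtemp()) / "my_dataset.py"
src.write_text("MARKER = 'original'\n")

cache_copy = pathlib.Path(tempfile.mkdtemp()) / "my_dataset.py"
shutil.copy(src, cache_copy)

spec = importlib.util.spec_from_file_location("my_dataset", cache_copy)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

# The module the debugger sees lives at the cached path, not the original.
print(mod.__file__)
```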
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7249/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7248
7,248
ModuleNotFoundError: No module named 'datasets.tasks'
{ "avatar_url": "https://avatars.githubusercontent.com/u/93593941?v=4", "events_url": "https://api.github.com/users/shoowadoo/events{/privacy}", "followers_url": "https://api.github.com/users/shoowadoo/followers", "following_url": "https://api.github.com/users/shoowadoo/following{/other_user}", "gists_url": "https://api.github.com/users/shoowadoo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shoowadoo", "id": 93593941, "login": "shoowadoo", "node_id": "U_kgDOBZQhVQ", "organizations_url": "https://api.github.com/users/shoowadoo/orgs", "received_events_url": "https://api.github.com/users/shoowadoo/received_events", "repos_url": "https://api.github.com/users/shoowadoo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shoowadoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shoowadoo/subscriptions", "type": "User", "url": "https://api.github.com/users/shoowadoo", "user_view_type": "public" }
[]
open
false
[ "tasks was removed in v3: #6999 \r\n\r\nI also don't see why TextClassification is imported, since it's not used after. So the fix is simple: delete this line.", "I opened https://huggingface.co/datasets/knowledgator/events_classification_biotech/discussions/7 to remove the line, hopefully the dataset owner will merge it soon" ]
2024-10-23T21:58:25Z
2024-10-24T17:00:19Z
null
NONE
null
null
### Describe the bug

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-9-13b5f31bd391> in <cell line: 1>()
----> 1 dataset = load_dataset('knowledgator/events_classification_biotech')

11 frames
/usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2130
   2131     # Create a dataset builder
-> 2132     builder_instance = load_dataset_builder(
   2133         path=path,
   2134         name=name,

/usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
   1886         raise ValueError(error_msg)
   1887
-> 1888     builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name)
   1889     # Instantiate the dataset builder
   1890     builder_instance: DatasetBuilder = builder_cls(

/usr/local/lib/python3.10/dist-packages/datasets/load.py in get_dataset_builder_class(dataset_module, dataset_name)
    246         dataset_module.importable_file_path
    247     ) if dataset_module.importable_file_path else nullcontext():
--> 248         builder_cls = import_main_class(dataset_module.module_path)
    249     if dataset_module.builder_configs_parameters.builder_configs:
    250         dataset_name = dataset_name or dataset_module.builder_kwargs.get("dataset_name")

/usr/local/lib/python3.10/dist-packages/datasets/load.py in import_main_class(module_path)
    167 def import_main_class(module_path) -> Optional[Type[DatasetBuilder]]:
    168     """Import a module at module_path and return its main class: a DatasetBuilder"""
--> 169     module = importlib.import_module(module_path)
    170     # Find the main class in our imported module
    171     module_main_cls = None

/usr/lib/python3.10/importlib/__init__.py in import_module(name, package)
    124             break
    125         level += 1
--> 126     return _bootstrap._gcd_import(name[level:], package, level)
    127
    128

/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _load_unlocked(spec)
/usr/lib/python3.10/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib/python3.10/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)

~/.cache/huggingface/modules/datasets_modules/datasets/knowledgator--events_classification_biotech/9c8086d498c3104de3a3c5b6640837e18ccd829dcaca49f1cdffe3eb5c4a6361/events_classification_biotech.py in <module>
      1 import datasets
      2 from datasets import load_dataset
----> 3 from datasets.tasks import TextClassification
      4
      5 DESCRIPTION = """

ModuleNotFoundError: No module named 'datasets.tasks'
```

### Steps to reproduce the bug

```python
!pip install datasets
from datasets import load_dataset

dataset = load_dataset('knowledgator/events_classification_biotech')
```

### Expected behavior

no ModuleNotFoundError

### Environment info

google colab
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7248/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7247
7,247
Adding a column with a dict structure when mapping leads to wrong order
{ "avatar_url": "https://avatars.githubusercontent.com/u/114604968?v=4", "events_url": "https://api.github.com/users/chchch0109/events{/privacy}", "followers_url": "https://api.github.com/users/chchch0109/followers", "following_url": "https://api.github.com/users/chchch0109/following{/other_user}", "gists_url": "https://api.github.com/users/chchch0109/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chchch0109", "id": 114604968, "login": "chchch0109", "node_id": "U_kgDOBtS7qA", "organizations_url": "https://api.github.com/users/chchch0109/orgs", "received_events_url": "https://api.github.com/users/chchch0109/received_events", "repos_url": "https://api.github.com/users/chchch0109/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chchch0109/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chchch0109/subscriptions", "type": "User", "url": "https://api.github.com/users/chchch0109", "user_view_type": "public" }
[]
open
false
[]
2024-10-22T18:55:11Z
2024-10-22T18:55:23Z
null
NONE
null
null
### Describe the bug

In the `map()` function, I want to add a new column with a dict structure:

```python
def map_fn(example):
    example['text'] = {'user': ..., 'assistant': ...}
    return example
```

However, this leads to the wrong key order `{'assistant': ..., 'user': ...}` in the dataset, so I can't concatenate two datasets due to the different feature structures. [Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example.

This seems to be an issue in the low-level pyarrow library rather than in datasets; however, I think datasets should allow concatenating two datasets that actually have the same structure.

### Steps to reproduce the bug

[Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example.

### Expected behavior

The two datasets can be concatenated.

### Environment info

N/A
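One possible user-side workaround (a sketch, under the assumption that the mismatch comes from inferred struct field order): emit nested dict keys in a fixed order in every map function so both datasets infer identical struct features. `prompt` and `reply` are hypothetical input columns used only for illustration.

```python
def map_fn(example):
    text = {"user": example["prompt"], "assistant": example["reply"]}
    # Emit keys in sorted order so every dataset mapped with this function
    # infers the same struct field order for the "text" column.
    example["text"] = {k: text[k] for k in sorted(text)}
    return example

row = map_fn({"prompt": "hi", "reply": "hello"})
print(list(row["text"]))  # ['assistant', 'user']
```

Another option, if the order has already diverged, is to cast one dataset to the other's features before concatenating.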
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7247/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7246
7,246
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7246). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-22T15:04:47Z
2024-10-22T15:07:31Z
2024-10-22T15:04:58Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7246.diff", "html_url": "https://github.com/huggingface/datasets/pull/7246", "merged_at": "2024-10-22T15:04:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/7246.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7246" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7246/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7245
7,245
Release: 3.0.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7245). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-22T14:53:34Z
2024-10-22T15:01:50Z
2024-10-22T15:01:47Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7245.diff", "html_url": "https://github.com/huggingface/datasets/pull/7245", "merged_at": "2024-10-22T15:01:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/7245.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7245" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7245/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7244
7,244
use huggingface_hub offline mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7244). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-22T13:27:16Z
2024-10-22T14:10:45Z
2024-10-22T14:10:20Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7244.diff", "html_url": "https://github.com/huggingface/datasets/pull/7244", "merged_at": "2024-10-22T14:10:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/7244.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7244" }
and better handling of `LocalEntryNotFoundError` cc @Wauplin follow-up to #7234
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7244/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7243
7,243
ArrayXD with None as leading dim incompatible with DatasetCardData
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
open
false
[ "It looks like `CardData` in `huggingface_hub` removes None values where it shouldn't. Indeed it calls `_remove_none` on the return of `to_dict()`:\r\n\r\n```python\r\n def to_dict(self) -> Dict[str, Any]:\r\n \"\"\"Converts CardData to a dict.\r\n\r\n Returns:\r\n `dict`: CardData represented as a dictionary ready to be dumped to a YAML\r\n block for inclusion in a README.md file.\r\n \"\"\"\r\n\r\n data_dict = copy.deepcopy(self.__dict__)\r\n self._to_dict(data_dict)\r\n return _remove_none(data_dict)\r\n```\r\n\r\nWould it be ok to remove `list()` from being scanned in `_remove_none` ? it could also be a specific behavior to DatasetCardData if necessary @Wauplin ", "I have actually no idea why none values are removed in model and dataset card data... :see_no_evil:\r\nLooks like `_remove_none` has been introduced at the same time as the entire repocard module (see https://github.com/huggingface/huggingface_hub/pull/940). I would be tempted to remove `_remove_none` entirely actually and only remove \"top-level\" None values (i.e. if something like `pipeline_tag=None` due to a default value in kwargs => we remove it). Hard to tell what could be the side effects but I'm not against trying.\r\n\r\n\r\nHowever, I'm not really in favor in making an exception only for lists. It would mean that tuples, sets and dicts are filtered but not lists, which is pretty inconsistent.", "let's do it for top level attributes yes", "I opened https://github.com/huggingface/huggingface_hub/pull/2626 to address it :)", "thanks !" ]
2024-10-21T15:08:13Z
2024-10-22T14:18:10Z
null
CONTRIBUTOR
null
null
### Describe the bug

Creating a dataset with ArrayXD features leads to errors when downloading from the hub, because DatasetCardData removes the Nones. @lhoestq

### Steps to reproduce the bug

```python
import numpy as np
from datasets import Array2D, Dataset, Features, load_dataset


def examples_generator():
    for i in range(4):
        yield {
            "array_1d": np.zeros((10, 1), dtype="uint16"),
            "array_2d": np.zeros((10, 1), dtype="uint16"),
        }


features = Features(array_1d=Array2D((None, 1), "uint16"), array_2d=Array2D((None, 1), "uint16"))
dataset = Dataset.from_generator(examples_generator, features=features)
dataset.push_to_hub("alex-hh/test_array_1d2d")
ds = load_dataset("alex-hh/test_array_1d2d")
```

The source of the error appears to be `DatasetCardData.to_dict` invoking `DatasetCardData._remove_none`:

```python
from huggingface_hub import DatasetCardData
from datasets.info import DatasetInfosDict

dataset_card_data = DatasetCardData()
DatasetInfosDict({"default": dataset.info.copy()}).to_dataset_card_data(dataset_card_data)
print(dataset_card_data.to_dict())  # removes Nones in shape
```

### Expected behavior

It should be possible to load datasets saved with shape None in the leading dimension.

### Environment info

3.0.2 and latest huggingface_hub
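The suspected mechanism can be reproduced standalone. This is a sketch of the assumed behaviour of a recursive None-stripping helper (not the actual `huggingface_hub` implementation): dropping None values inside lists turns an ArrayXD shape like `(None, 1)` into `(1,)` in the card YAML.

```python
def remove_none(obj):
    # Recursively drop None values from dicts, lists and tuples, which is
    # exactly what corrupts a shape containing None as a dimension.
    if isinstance(obj, dict):
        return {k: remove_none(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, (list, tuple)):
        return type(obj)(remove_none(v) for v in obj if v is not None)
    return obj

print(remove_none({"shape": [None, 1]}))  # {'shape': [1]}
```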
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7243/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/issues/7241
7,241
`push_to_hub` overwrite argument
{ "avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4", "events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}", "followers_url": "https://api.github.com/users/ceferisbarov/followers", "following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}", "gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ceferisbarov", "id": 60838378, "login": "ceferisbarov", "node_id": "MDQ6VXNlcjYwODM4Mzc4", "organizations_url": "https://api.github.com/users/ceferisbarov/orgs", "received_events_url": "https://api.github.com/users/ceferisbarov/received_events", "repos_url": "https://api.github.com/users/ceferisbarov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions", "type": "User", "url": "https://api.github.com/users/ceferisbarov", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
[ "Hi ! Do you mean deleting all the files ? or erasing the repository git history before push_to_hub ?", "Hi! I meant the latter.", "I don't think there is a `huggingface_hub` utility to erase the git history, cc @Wauplin maybe ?", "What is the goal exactly of deleting all the git history without deleting the repo? ", "You can use [`super_squash_commit`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.super_squash_history) to squash all the commits into a single one, hence deleting the git history. This is not exactly what you asked for since it squashes the commits for a specific revision (example: \"all commits on main\"). This means that if other branches exists, they are kept the same. Also if some PRs are already opened on the repo, they will become unmergeable since the commits will have diverted.", "So the solution is:\r\n\r\n```python\r\nfrom huggingface_hub import HfApi\r\nrepo_id = \"username/dataset_name\"\r\nds.push_to_hub(repo_id)\r\nHfApi().super_squash_commit(repo_id)\r\n```\r\n\r\nThis way you erase previous git history to end up with only 1 commit containing your dataset.\r\nStill, I'd be curious why it's important in your case. Is it to save storage space ? or to disallow loading old versions of the data ?", "Thanks, everyone! I am building a new dataset and playing around with column names, splits, etc. Sometimes I push to the hub to share it with other teammates, I don't want those variations to be part of the repo. 
Deleting the repo from the website takes a little time, but it also loses repo settings that I have set, since I always set it to public with manually approved requests.\r\n\r\nBTW, I had to write `HfApi().super_squash_history(repo_id, repo_type=\"dataset\")`, but otherwise it works.", "@ceferisbarov just to let you know, recreating a gated repo + granting access to your teammates is something that you can automate with something like this (not fully tested but should work):\r\n\r\n```py\r\nfrom huggingface_hub import HfApi\r\n\r\napi = HfApi()\r\napi.delete_repo(repo_id, repo_type=\"dataset\", missing_ok=True)\r\napi.create_repo(repo_id, repo_type=\"dataset\", private=False)\r\napi.update_repo_settings(repo_id, repo_type=\"dataset\", gated=\"manual\")\r\nfor user in [\"user1\", \"user2\"] # list of teammates\r\n api.grant_access(repo_id, user, repo_type=\"dataset\")\r\n```\r\n\r\nI think it'd be a better solution than squashing commits (which is more of a hack), typically if you are using the dataset viewer.", "This is great, @Wauplin. If we can achieve this with HfApi, then we probably don't need to add another parameter to push_to_hub. I am closing the issue." ]
2024-10-20T03:23:26Z
2024-10-24T17:39:08Z
2024-10-24T17:39:08Z
NONE
null
null
### Feature request Add an `overwrite` argument to the `push_to_hub` method. ### Motivation I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials. ### Your contribution I can create a PR.
{ "avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4", "events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}", "followers_url": "https://api.github.com/users/ceferisbarov/followers", "following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}", "gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ceferisbarov", "id": 60838378, "login": "ceferisbarov", "node_id": "MDQ6VXNlcjYwODM4Mzc4", "organizations_url": "https://api.github.com/users/ceferisbarov/orgs", "received_events_url": "https://api.github.com/users/ceferisbarov/received_events", "repos_url": "https://api.github.com/users/ceferisbarov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions", "type": "User", "url": "https://api.github.com/users/ceferisbarov", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7241/reactions" }
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7240
7,240
Feature Request: Add functionality to pass split types like train, test in DatasetDict.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jp1924", "id": 93233241, "login": "jp1924", "node_id": "U_kgDOBY6gWQ", "organizations_url": "https://api.github.com/users/jp1924/orgs", "received_events_url": "https://api.github.com/users/jp1924/received_events", "repos_url": "https://api.github.com/users/jp1924/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "type": "User", "url": "https://api.github.com/users/jp1924", "user_view_type": "public" }
[]
closed
false
[]
2024-10-19T09:59:12Z
2025-01-06T08:04:08Z
2025-01-06T08:04:08Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7240.diff", "html_url": "https://github.com/huggingface/datasets/pull/7240", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7240.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7240" }
Hello datasets! We often encounter situations where we need to preprocess data differently depending on split types such as train, valid, and test. However, while DatasetDict.map has features to pass rank or index, there's no functionality to pass split types. Therefore, I propose adding a 'with_splits' parameter to DatasetDict, which would allow passing the split type through fn_kwargs.
{ "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jp1924", "id": 93233241, "login": "jp1924", "node_id": "U_kgDOBY6gWQ", "organizations_url": "https://api.github.com/users/jp1924/orgs", "received_events_url": "https://api.github.com/users/jp1924/received_events", "repos_url": "https://api.github.com/users/jp1924/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "type": "User", "url": "https://api.github.com/users/jp1924", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7240/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7238
7,238
incompatibility issue when using load_dataset with datasets==3.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/74985234?v=4", "events_url": "https://api.github.com/users/jupiterMJM/events{/privacy}", "followers_url": "https://api.github.com/users/jupiterMJM/followers", "following_url": "https://api.github.com/users/jupiterMJM/following{/other_user}", "gists_url": "https://api.github.com/users/jupiterMJM/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jupiterMJM", "id": 74985234, "login": "jupiterMJM", "node_id": "MDQ6VXNlcjc0OTg1MjM0", "organizations_url": "https://api.github.com/users/jupiterMJM/orgs", "received_events_url": "https://api.github.com/users/jupiterMJM/received_events", "repos_url": "https://api.github.com/users/jupiterMJM/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jupiterMJM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jupiterMJM/subscriptions", "type": "User", "url": "https://api.github.com/users/jupiterMJM", "user_view_type": "public" }
[]
open
false
[ "Hi! I'm also getting the same issue - have you been able to find a solution to this? ", "From what I remember, I stayed at the \"downgraded\" version of dataset (2.21.0)" ]
2024-10-18T21:25:23Z
2024-12-09T09:49:32Z
null
NONE
null
null
### Describe the bug There is a bug when using load_dataset with datasets version 3.0.1. Please see below in the "steps to reproduce the bug". To resolve the bug, I had to downgrade to version 2.21.0 OS: Ubuntu 24 (AWS instance) Python: same bug under 3.12 and 3.10 The error I had was: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/load.py", line 2096, in load_dataset builder_instance.download_and_prepare( File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare self._download_and_prepare( File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 1647, in _download_and_prepare super()._download_and_prepare( File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 977, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ubuntu/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_0/cb17afd34f5799f97e8f48398748f83006335b702bd785f9880797838d541b81/common_voice_6_0.py", line 159, in _split_generators archive_path = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template)) File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_manager.py", line 150, in download download_config = self.download_config.copy() File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_config.py", line 73, in copy return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) TypeError: DownloadConfig.__init__() got an unexpected keyword argument 'ignore_url_params' ### Steps to reproduce the bug 1. install datasets with ```pip install datasets --upgrade``` 2. launch python; from datasets import load_dataset 3. run load_dataset("mozilla-foundation/common_voice_6_0") 4. exit python 5. uninstall datasets; then ```pip install datasets==2.21.0``` 6. launch python; from datasets import load_dataset 7. run load_dataset("mozilla-foundation/common_voice_6_0") 8. Everything runs great now ### Expected behavior Be able to download a dataset without error ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 3.0.1 - Platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.39 - Python version: 3.12.4 - `huggingface_hub` version: 0.26.0 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7238/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7236
7,236
[MINOR:TYPO] Update arrow_dataset.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki", "user_view_type": "public" }
[]
closed
false
[]
2024-10-18T12:10:03Z
2024-10-24T15:06:43Z
2024-10-24T15:06:43Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7236.diff", "html_url": "https://github.com/huggingface/datasets/pull/7236", "merged_at": "2024-10-24T15:06:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/7236.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7236" }
Fix wrong link. csv kwargs docstring link was pointing to pandas json docs.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7236/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7234
7,234
No need for dataset_info
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7234). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "merging this one for now, let me know if you'd like to see additional changes for error handling (I'll take care of them before doing a release)" ]
2024-10-17T09:54:03Z
2024-10-22T12:30:40Z
2024-10-21T16:44:34Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7234.diff", "html_url": "https://github.com/huggingface/datasets/pull/7234", "merged_at": "2024-10-21T16:44:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/7234.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7234" }
save a useless call to /api/datasets/repo_id
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7234/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7233
7,233
Issue with the number of dataset examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/180297268?v=4", "events_url": "https://api.github.com/users/want-well/events{/privacy}", "followers_url": "https://api.github.com/users/want-well/followers", "following_url": "https://api.github.com/users/want-well/following{/other_user}", "gists_url": "https://api.github.com/users/want-well/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/want-well", "id": 180297268, "login": "want-well", "node_id": "U_kgDOCr8eNA", "organizations_url": "https://api.github.com/users/want-well/orgs", "received_events_url": "https://api.github.com/users/want-well/received_events", "repos_url": "https://api.github.com/users/want-well/repos", "site_admin": false, "starred_url": "https://api.github.com/users/want-well/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/want-well/subscriptions", "type": "User", "url": "https://api.github.com/users/want-well", "user_view_type": "public" }
[]
open
false
[]
2024-10-17T07:41:44Z
2024-10-17T07:41:44Z
null
NONE
null
null
### Describe the bug I am fine-tuning a large model here. When the dataset contains 718 examples, fine-tuning works normally, but as soon as I add an example (either a copy of one of the first 718 or a brand-new one), an error is raised. ### Steps to reproduce the bug 1. The last two examples of the dataset that still fine-tunes successfully are: { "messages": [ { "role": "user", "content": "完成校正装置设计后需要进行哪些工作?" }, { "role": "assistant", "content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。" } ] } { "messages": [ { "role": "user", "content": "为什么闭环带宽要求会影响串联超前校正的效果?" }, { "role": "assistant", "content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。" } ] } 2. When I add the second-to-last example back into the dataset, my last three examples become { "messages": [ { "role": "user", "content": "完成校正装置设计后需要进行哪些工作?" }, { "role": "assistant", "content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。" } ] } { "messages": [ { "role": "user", "content": "为什么闭环带宽要求会影响串联超前校正的效果?" }, { "role": "assistant", "content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。" } ] } { "messages": [ { "role": "user", "content": "完成校正装置设计后需要进行哪些工作?"
}, { "role": "assistant", "content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。" } ] } 这时系统会显示bug: root@autodl-container-027f4cad3d-6baf4e64:~/autodl-tmp# python GLM-4/finetune_demo/finetune.py datasets/ ZhipuAI/glm-4-9b-chat GLM-4/finetune_demo/configs/lora.yaml Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.04it/s] The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable. trainable params: 2,785,280 || all params: 9,402,736,640 || trainable%: 0.0296 Generating train split: 0 examples [00:00, ? examples/s]Failed to load JSON from file '/root/autodl-tmp/datasets/train.jsonl' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 718 Generating train split: 0 examples [00:00, ? 
examples/s] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:153 in _generate_tables │ │ │ │ 150 │ │ │ │ │ │ │ │ with open( │ │ 151 │ │ │ │ │ │ │ │ │ file, encoding=self.config.encoding, errors=self.con │ │ 152 │ │ │ │ │ │ │ │ ) as f: │ │ ❱ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │ │ 154 │ │ │ │ │ │ │ except ValueError: │ │ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │ │ 156 │ │ │ │ │ │ │ │ raise e │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:815 in read_json │ │ │ │ 812 │ if chunksize: │ │ 813 │ │ return json_reader │ │ 814 │ else: │ │ ❱ 815 │ │ return json_reader.read() │ │ 816 │ │ 817 │ │ 818 class JsonReader(abc.Iterator, Generic[FrameSeriesStrT]): │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1025 in read │ │ │ │ 1022 │ │ │ │ │ │ data_lines = data.split("\n") │ │ 1023 │ │ │ │ │ │ obj = self._get_object_parser(self._combine_lines(data_lines)) │ │ 1024 │ │ │ │ else: │ │ ❱ 1025 │ │ │ │ │ obj = self._get_object_parser(self.data) │ │ 1026 │ │ │ │ if self.dtype_backend is not lib.no_default: │ │ 1027 │ │ │ │ │ return obj.convert_dtypes( │ │ 1028 │ │ │ │ │ │ infer_objects=False, dtype_backend=self.dtype_backend │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1051 in _get_object_parser │ │ │ │ 1048 │ │ } │ │ 1049 │ │ obj = None │ │ 1050 │ │ if typ == "frame": │ │ ❱ 1051 │ │ │ obj = FrameParser(json, **kwargs).parse() │ │ 1052 │ │ │ │ 1053 │ │ if typ == "series" or obj is None: │ │ 1054 │ │ │ if not isinstance(dtype, bool): │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1187 in parse │ │ │ │ 1184 │ │ │ 
1185 │ @final │ │ 1186 │ def parse(self): │ │ ❱ 1187 │ │ self._parse() │ │ 1188 │ │ │ │ 1189 │ │ if self.obj is None: │ │ 1190 │ │ │ return None │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1403 in _parse │ │ │ │ 1400 │ │ │ │ 1401 │ │ if orient == "columns": │ │ 1402 │ │ │ self.obj = DataFrame( │ │ ❱ 1403 │ │ │ │ ujson_loads(json, precise_float=self.precise_float), dtype=None │ │ 1404 │ │ │ ) │ │ 1405 │ │ elif orient == "split": │ │ 1406 │ │ │ decoded = { │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ValueError: Trailing data During handling of the above exception, another exception occurred: ╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1997 in _prepare_split_single │ │ │ │ 1994 │ │ │ ) │ │ 1995 │ │ │ try: │ │ 1996 │ │ │ │ _time = time.time() │ │ ❱ 1997 │ │ │ │ for _, table in generator: │ │ 1998 │ │ │ │ │ if max_shard_size is not None and writer._num_bytes > max_shard_size │ │ 1999 │ │ │ │ │ │ num_examples, num_bytes = writer.finalize() │ │ 2000 │ │ │ │ │ │ writer.close() │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:156 in _generate_tables │ │ │ │ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │ │ 154 │ │ │ │ │ │ │ except ValueError: │ │ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │ │ ❱ 156 │ │ │ │ │ │ │ │ raise e │ │ 157 │ │ │ │ │ │ │ if df.columns.tolist() == [0]: │ │ 158 │ │ │ │ │ │ │ │ df.columns = list(self.config.features) if self.config.f │ │ 159 │ │ │ │ │ │ │ 
try: │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:130 in _generate_tables │ │ │ │ 127 │ │ │ │ │ │ try: │ │ 128 │ │ │ │ │ │ │ while True: │ │ 129 │ │ │ │ │ │ │ │ try: │ │ ❱ 130 │ │ │ │ │ │ │ │ │ pa_table = paj.read_json( │ │ 131 │ │ │ │ │ │ │ │ │ │ io.BytesIO(batch), read_options=paj.ReadOptions( │ │ 132 │ │ │ │ │ │ │ │ │ ) │ │ 133 │ │ │ │ │ │ │ │ │ break │ │ │ │ in pyarrow._json.read_json:308 │ │ │ │ in pyarrow.lib.pyarrow_internal_check_status:154 │ │ │ │ in pyarrow.lib.check_status:91 │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ArrowInvalid: JSON parse error: Missing a name for object member. in row 718 The above exception was the direct cause of the following exception: ╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:406 in main │ │ │ │ 403 ): │ │ 404 │ ft_config = FinetuningConfig.from_file(config_file) │ │ 405 │ tokenizer, model = load_tokenizer_and_model(model_dir, peft_config=ft_config.peft_co │ │ ❱ 406 │ data_manager = DataManager(data_dir, ft_config.data_config) │ │ 407 │ │ │ 408 │ train_dataset = data_manager.get_dataset( │ │ 409 │ │ Split.TRAIN, │ │ │ │ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:204 in __init__ │ │ │ │ 201 │ def __init__(self, data_dir: str, data_config: DataConfig): │ │ 202 │ │ self._num_proc = data_config.num_proc │ │ 203 │ │ │ │ ❱ 204 │ │ self._dataset_dct = _load_datasets( │ │ 205 │ │ │ data_dir, │ │ 206 │ │ │ data_config.data_format, │ │ 207 │ │ │ data_config.data_files, │ │ │ │ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:189 in 
_load_datasets │ │ │ │ 186 │ │ num_proc: Optional[int], │ │ 187 ) -> DatasetDict: │ │ 188 │ if data_format == '.jsonl': │ │ ❱ 189 │ │ dataset_dct = load_dataset( │ │ 190 │ │ │ data_dir, │ │ 191 │ │ │ data_files=data_files, │ │ 192 │ │ │ split=None, │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/load.py:2616 in load_dataset │ │ │ │ 2613 │ │ return builder_instance.as_streaming_dataset(split=split) │ │ 2614 │ │ │ 2615 │ # Download and prepare data │ │ ❱ 2616 │ builder_instance.download_and_prepare( │ │ 2617 │ │ download_config=download_config, │ │ 2618 │ │ download_mode=download_mode, │ │ 2619 │ │ verification_mode=verification_mode, │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1029 in download_and_prepare │ │ │ │ 1026 │ │ │ │ │ │ │ prepare_split_kwargs["max_shard_size"] = max_shard_size │ │ 1027 │ │ │ │ │ │ if num_proc is not None: │ │ 1028 │ │ │ │ │ │ │ prepare_split_kwargs["num_proc"] = num_proc │ │ ❱ 1029 │ │ │ │ │ │ self._download_and_prepare( │ │ 1030 │ │ │ │ │ │ │ dl_manager=dl_manager, │ │ 1031 │ │ │ │ │ │ │ verification_mode=verification_mode, │ │ 1032 │ │ │ │ │ │ │ **prepare_split_kwargs, │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1124 in _download_and_prepare │ │ │ │ 1121 │ │ │ │ │ 1122 │ │ │ try: │ │ 1123 │ │ │ │ # Prepare split will record examples associated to the split │ │ ❱ 1124 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │ │ 1125 │ │ │ except OSError as e: │ │ 1126 │ │ │ │ raise OSError( │ │ 1127 │ │ │ │ │ "Cannot find data file. 
" │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1884 in _prepare_split │ │ │ │ 1881 │ │ │ gen_kwargs = split_generator.gen_kwargs │ │ 1882 │ │ │ job_id = 0 │ │ 1883 │ │ │ with pbar: │ │ ❱ 1884 │ │ │ │ for job_id, done, content in self._prepare_split_single( │ │ 1885 │ │ │ │ │ gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args │ │ 1886 │ │ │ │ ): │ │ 1887 │ │ │ │ │ if done: │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:2040 in _prepare_split_single │ │ │ │ 2037 │ │ │ │ e = e.__context__ │ │ 2038 │ │ │ if isinstance(e, DatasetGenerationError): │ │ 2039 │ │ │ │ raise │ │ ❱ 2040 │ │ │ raise DatasetGenerationError("An error occurred while generating the dataset │ │ 2041 │ │ │ │ 2042 │ │ yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_ │ │ 2043 │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ DatasetGenerationError: An error occurred while generating the dataset 3.请问是否可以帮我解决 ### Expected behavior 希望问题可以得到解决 ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.20.0 - Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.8 - `huggingface_hub` version: 0.24.6 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.12.2
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7233/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7232
7,232
(Super tiny doc update) Mention to_polars
{ "avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4", "events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}", "followers_url": "https://api.github.com/users/fzyzcjy/followers", "following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}", "gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fzyzcjy", "id": 5236035, "login": "fzyzcjy", "node_id": "MDQ6VXNlcjUyMzYwMzU=", "organizations_url": "https://api.github.com/users/fzyzcjy/orgs", "received_events_url": "https://api.github.com/users/fzyzcjy/received_events", "repos_url": "https://api.github.com/users/fzyzcjy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions", "type": "User", "url": "https://api.github.com/users/fzyzcjy", "user_view_type": "public" }
[]
closed
false
[ "You are welcome!" ]
2024-10-17T06:08:53Z
2024-10-24T23:11:05Z
2024-10-24T15:06:16Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7232.diff", "html_url": "https://github.com/huggingface/datasets/pull/7232", "merged_at": "2024-10-24T15:06:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/7232.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7232" }
Polars is also quite popular now, so this tiny update tells users that Polars is supported
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7232/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7231
7,231
Fix typo in image dataset docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7231). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-16T14:05:46Z
2024-10-16T17:06:21Z
2024-10-16T17:06:19Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7231.diff", "html_url": "https://github.com/huggingface/datasets/pull/7231", "merged_at": "2024-10-16T17:06:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/7231.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7231" }
Fix typo in image dataset docs. Typo reported by @datavistics.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7231/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7230
7,230
Video support
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7230). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-15T18:17:29Z
2024-10-24T16:39:51Z
2024-10-24T16:39:50Z
MEMBER
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7230.diff", "html_url": "https://github.com/huggingface/datasets/pull/7230", "merged_at": "2024-10-24T16:39:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/7230.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7230" }
(wip and experimental) adding the `Video` type based on `VideoReader` from `decord` ```python >>> from datasets import load_dataset >>> ds = load_dataset("path/to/videos", split="train").with_format("torch") >>> print(ds[0]["video"]) <decord.video_reader.VideoReader object at 0x337a47910> >>> print(ds[0]["video"][0]) tensor([[[73, 73, 73], [73, 73, 73], [73, 73, 73], ..., [23, 23, 23], [23, 23, 23], [23, 23, 23]]], dtype=torch.uint8) ``` the storage is the same as for audio and images: `{"path": pa.string(), "bytes": pa.binary()}`, and I made a small change to keep the hf:// URL in the "path" field when possible, so the viewer can link to files on the Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7230/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/pull/7229
7,229
handle config_name=None in push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
[ "not sure it's a good idea, we always need a config name so better have the correct default and not support None (which could lead to think it doesn't have a config name, while it does)" ]
2024-10-15T13:48:57Z
2024-10-24T17:51:52Z
2024-10-24T17:51:52Z
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7229.diff", "html_url": "https://github.com/huggingface/datasets/pull/7229", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7229.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7229" }
This caught me out - thought it might be better to explicitly handle None?
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7229/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7228
7,228
Composite (multi-column) features
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[]
2024-10-14T23:59:19Z
2024-10-15T11:17:15Z
null
CONTRIBUTOR
null
null
### Feature request Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding. Although it is currently possible to nest features as structs, my impression is that, in particular when dealing with e.g. a feature composed of multiple numpy array / ArrayXD features, it would be more efficient to store each ArrayXD as a separate column (though I'm not sure by how much). Perhaps specification / implementation could be supported by something like: ``` features=Features({("feature0", "feature1"): Features(feature0=Array2D((None, 10), dtype="float32"), feature1=Array2D((None, 10), dtype="float32"))}) ``` ### Motivation Defining efficient composite feature types based on numpy arrays for representing data such as graphs with multiple node and edge attributes is currently challenging. ### Your contribution Possibly able to contribute
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7228/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://github.com/huggingface/datasets/pull/7227
7,227
fast array extraction
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
open
false
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7227). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I've updated the most straightforward failing test cases - lmk if you agree with those.\r\n\r\nMight need some help / pointers on the remaining new failing tests, which seem a little bit more subtle.", "@lhoestq I've had a go at fixing a few more test cases but getting quite uncertain about the remaining ones (as well as about some of the array writing ones that I tried to fix in my last commit). There are still 27 failures vs 21 on main. I'm not completely sure in some cases what intended behaviour is and my understanding of the flow for typed writing is a bit vague.", "@lhoestq do you have any thoughts on this? I wasn't able to resolve all the test issues but the basic functionality seemed useful?" ]
2024-10-14T20:51:32Z
2025-01-28T09:39:26Z
null
CONTRIBUTOR
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/7227.diff", "html_url": "https://github.com/huggingface/datasets/pull/7227", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7227.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7227" }
Implements #7210 using the method suggested in https://github.com/huggingface/datasets/pull/7207#issuecomment-2411789307 ```python import numpy as np from datasets import Dataset, Features, Array3D features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")}) dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features) ``` ~0.02 s vs 0.9s on main ```python import time ds = dataset.to_iterable_dataset() t0 = time.time() for ex in ds: pass t1 = time.time() ``` < 0.01 s vs 1.3 s on main @lhoestq I can see this breaks a bunch of array-related tests but can update the test cases if you would support making this change? I also added an Array1D feature which will always be decoded into a numpy array and likewise improves extraction performance: ```python from datasets import Dataset, Features, Array1D, Sequence, Value array_features=Features(**{"array0": Array1D((None,), dtype="float32"), "array1": Array1D((None,), dtype="float32")}) sequence_features=Features(**{"array0": Sequence(feature=Value("float32"), length=-1), "array1": Sequence(feature=Value("float32"), length=-1)}) array_dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [20000,10000]*25] for i in range(2)}, features=array_features) sequence_dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [20000,10000]*25] for i in range(2)}, features=sequence_features) ``` ```python t0 = time.time() for ex in array_dataset.to_iterable_dataset(): pass t1 = time.time() ``` < 0.01 s ```python t0 = time.time() for ex in sequence_dataset.to_iterable_dataset(): pass t1 = time.time() ``` ~1.1s And also added support for extracting structs of arrays as dicts of numpy arrays: ```python import numpy as np from datasets import Dataset, Features, Array3D, Sequence features=Features(struct={"array0": Array3D((None,10,10), dtype="float32"), 
"array1": Array3D((None,10,10), dtype="float32")}, _list=Sequence(feature=Array3D((None,10,10), dtype="float32"))) dataset = Dataset.from_dict({"struct": [{f"array{i}": np.zeros((x,10,10), dtype=np.float32) for i in range(2)} for x in [2000,1000]*25], "_list": [[np.zeros((x,10,10), dtype=np.float32) for i in range(2)] for x in [2000,1000]*25]}, features=features) ``` ```python t0 = time.time() for ex in dataset.to_iterable_dataset(): pass t1 = time.time() assert isinstance(ex["struct"]["array0"], np.ndarray) and ex["struct"]["array0"].ndim == 3 ``` ~0.02 s and no exception vs ~7s with an exception on main
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7227/reactions" }
null
null
null
true
https://github.com/huggingface/datasets/issues/7226
7,226
Add R as a How to use from the Polars (R) Library as an option
{ "avatar_url": "https://avatars.githubusercontent.com/u/45013044?v=4", "events_url": "https://api.github.com/users/ran-codes/events{/privacy}", "followers_url": "https://api.github.com/users/ran-codes/followers", "following_url": "https://api.github.com/users/ran-codes/following{/other_user}", "gists_url": "https://api.github.com/users/ran-codes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ran-codes", "id": 45013044, "login": "ran-codes", "node_id": "MDQ6VXNlcjQ1MDEzMDQ0", "organizations_url": "https://api.github.com/users/ran-codes/orgs", "received_events_url": "https://api.github.com/users/ran-codes/received_events", "repos_url": "https://api.github.com/users/ran-codes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ran-codes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ran-codes/subscriptions", "type": "User", "url": "https://api.github.com/users/ran-codes", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
[]
2024-10-14T19:56:07Z
2024-10-14T19:57:13Z
null
NONE
null
null
### Feature request The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add: ## Add Polars (R) option The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has Hugging Face functionality as well. ```r library(polars) df <- pl$read_parquet("hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet") ``` ## Polars (python) option ![image](https://github.com/user-attachments/assets/8f1bcd19-e578-4b18-b324-7cc00b80ac0a) ## Libraries Currently ![image](https://github.com/user-attachments/assets/0cf50063-f9db-443c-97b4-3ef0664b6e6e) ### Motivation There are many data/analysis/research/statistics teams (particularly in academia and pharma) that use R as the default language. R has great integration with most of the newer data techs (arrow, parquet, polars), and having this included could really help bring this community into the Hugging Face ecosystem. **This is a small, low-hanging-fruit front-end change but would make a big impact in expanding the community** ### Your contribution I am not sure which repository this should be in, but I have experience in R, Python and JS and am happy to submit a PR in the appropriate repository.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7226/reactions" }
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false