Dataset schema (one entry per column: name, type, and the observed range or number of distinct values):

- url: string (length 58–61)
- repository_url: string (1 distinct value)
- labels_url: string (length 72–75)
- comments_url: string (length 67–70)
- events_url: string (length 65–68)
- html_url: string (length 48–51)
- id: int64 (600M–3.67B)
- node_id: string (length 18–24)
- number: int64 (2–7.88k)
- title: string (length 1–290)
- user: dict
- labels: list (0–4 items)
- state: string (2 distinct values)
- locked: bool (1 class)
- assignee: dict
- assignees: list (0–4 items)
- comments: list (0–30 items)
- created_at: timestamp[s] (2020-04-14 18:18:51 to 2025-11-26 16:16:56)
- updated_at: timestamp[s] (2020-04-29 09:23:05 to 2025-11-30 03:52:07)
- closed_at: timestamp[s] (2020-04-29 09:23:05 to 2025-11-21 12:31:19), nullable (⌀)
- author_association: string (4 distinct values)
- type: null
- active_lock_reason: null
- draft: null
- pull_request: null
- body: string (length 0–228k), nullable (⌀)
- closed_by: dict
- reactions: dict
- timeline_url: string (length 67–70)
- performed_via_github_app: null
- state_reason: string (4 distinct values)
- sub_issues_summary: dict
- issue_dependencies_summary: dict
- is_pull_request: bool (1 class)
- closed_at_time_taken: duration[s]

The rows below follow this column order, with cells separated by `|`.
https://api.github.com/repos/huggingface/datasets/issues/2243
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2243/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2243/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2243/events
|
https://github.com/huggingface/datasets/issues/2243
| 862,909,389
|
MDU6SXNzdWU4NjI5MDkzODk=
| 2,243
|
Map is slow and processes batches one after another
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/villmow",
"id": 2743060,
"login": "villmow",
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"repos_url": "https://api.github.com/users/villmow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/villmow",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi @villmow, thanks for reporting.\r\n\r\nCould you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.\r\n\r\nOnce you update Datasets, please confirm if the problem persists.",
"Hi @albertvillanova, thanks for the reply. I just tried the new version and the problem still persists. \r\n\r\nDo I need to rebuild the saved dataset (which I load from disk) with the 1.6.0 version of datasets? My script loads this dataset and creates new datasets from it. I tried it without rebuilding.\r\n\r\nSee this short video of what happens. It does not create all processes at the same time:\r\n\r\nhttps://user-images.githubusercontent.com/2743060/115720139-0da3a500-a37d-11eb-833a-9bbacc70868d.mp4\r\n\r\n",
"There can be a bit of delay between the creations of the processes but this delay should be the same for both your `map` calls. We should look into this.\r\nAlso if you hav some code that reproduces this issue on google colab that'd be really useful !\r\n\r\nRegarding the speed differences:\r\nThis looks like a similar issue as https://github.com/huggingface/datasets/issues/1992 who is experiencing the same speed differences between processes.\r\nThis is a known bug that we are investigating. As of now I've never managed to reproduce it on my machine so it's pretty hard for me to find where this issue comes from.\r\n",
"Upgrade to 1.6.1 solved my problem somehow. I did not change any of my code, but now it starts all processes around the same time.",
"Nice ! I'm glad this works now.\r\nClosing for now, but feel free to re-open if you experience this issue again."
] | 2021-04-20T14:58:20
| 2021-05-03T17:54:33
| 2021-05-03T17:54:32
|
NONE
| null | null | null | null |
## Describe the bug
This bug is somewhat unclear to me and I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't give exact steps to reproduce; I'm sorry.
I process a large dataset in two steps. I first call `map` on a dataset I load from disk and create a new dataset from it. This works as expected and `map` uses all the workers I started it with. Then I process the dataset created in the first step, again with `map`, which is really slow and starts only one or two processes at a time. The number of processes is the same for both steps.
Pseudo code:
```python
ds = datasets.load_from_disk("path")
new_dataset = ds.map(work, batched=True, ...) # fast uses all processes
final_dataset = new_dataset.map(work2, batched=True, ...) # slow starts one process after another
```
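For clarity, here is a slightly more concrete version of the pseudo code above; the `num_proc` value, the path and the `work` functions are placeholders, not my real code:
```python
import datasets

def work(batch):
    # placeholder for the real first-stage processing
    return batch

def work2(batch):
    # placeholder for the real second-stage processing
    return batch

ds = datasets.load_from_disk("path")  # "path" stands in for my actual dataset directory

# First stage: all num_proc workers start right away and the map is fast.
new_dataset = ds.map(work, batched=True, num_proc=16)

# Second stage: same settings, but the workers are observed to start one after another.
final_dataset = new_dataset.map(work2, batched=True, num_proc=16)
```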
## Expected results
Second stage should be as fast as the first stage.
## Versions
Paste the output of the following code:
- Datasets: 1.5.0
- Python: 3.8.8 (default, Feb 24 2021, 21:46:12)
- Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10
Do you guys have any idea? Thanks a lot!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2243/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2243/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 13 days, 2:56:12
|
https://api.github.com/repos/huggingface/datasets/issues/2242
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2242/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2242/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2242/events
|
https://github.com/huggingface/datasets/issues/2242
| 862,870,205
|
MDU6SXNzdWU4NjI4NzAyMDU=
| 2,242
|
Link to datasets viewer on Quick Tour page returns "502 Bad Gateway"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6735707?v=4",
"events_url": "https://api.github.com/users/martavillegas/events{/privacy}",
"followers_url": "https://api.github.com/users/martavillegas/followers",
"following_url": "https://api.github.com/users/martavillegas/following{/other_user}",
"gists_url": "https://api.github.com/users/martavillegas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/martavillegas",
"id": 6735707,
"login": "martavillegas",
"node_id": "MDQ6VXNlcjY3MzU3MDc=",
"organizations_url": "https://api.github.com/users/martavillegas/orgs",
"received_events_url": "https://api.github.com/users/martavillegas/received_events",
"repos_url": "https://api.github.com/users/martavillegas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/martavillegas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martavillegas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/martavillegas",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"This should be fixed now!\r\n\r\ncc @srush "
] | 2021-04-20T14:19:51
| 2021-04-20T15:02:45
| 2021-04-20T15:02:45
|
NONE
| null | null | null | null |
Link to datasets viewer (https://huggingface.co/datasets/viewer/) on Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway".
The same error occurs with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2242/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2242/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:42:54
|
https://api.github.com/repos/huggingface/datasets/issues/2239
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2239/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2239/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2239/events
|
https://github.com/huggingface/datasets/issues/2239
| 861,904,306
|
MDU6SXNzdWU4NjE5MDQzMDY=
| 2,239
|
Error loading wikihow dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/odellus",
"id": 4686956,
"login": "odellus",
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"repos_url": "https://api.github.com/users/odellus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/odellus",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi @odellus, thanks for reporting.\r\n\r\nThe `wikihow` dataset has 2 versions:\r\n- `all`: Consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries.\r\n- `sep`: Consisting of each paragraph and its summary.\r\n\r\nTherefore, in order to load it, you have to specify which version you would like, for example:\r\n```python\r\ndataset = load_dataset('wikihow', 'all')\r\n```\r\n\r\nPlease, tell me if this solves your problem.",
"Good call out. I did try that and that's when it told me to download the\ndataset. Don't believe I have tried it with local files. Will try first\nthing in the morning and get back to you.\n\nOn Mon, Apr 19, 2021, 11:17 PM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hi @odellus <https://github.com/odellus>, thanks for reporting.\n>\n> The wikihow dataset has 2 versions:\n>\n> - all: Consisting of the concatenation of all paragraphs as the\n> articles and the bold lines as the reference summaries.\n> - sep: Consisting of each paragraph and its summary.\n>\n> Therefore, in order to load it, you have to specify which version you\n> would like, for example:\n>\n> dataset = load_dataset('wikihow', 'all')\n>\n> Please, tell me if this solves your problem.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2239#issuecomment-823004146>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABDYI3HVRTBI2QT3BOG262DTJUL57ANCNFSM43GV5BZQ>\n> .\n>\n",
"Hi @odellus, yes you are right.\r\n\r\nDue to the server where the `wikihow` dataset is hosted, the dataset can't be downloaded automatically by `huggingface` and you have to download it manually as you did.\r\n\r\nNevertheless, you have to specify which dataset version you would like to load anyway:\r\n```python\r\ndataset = load_dataset('wikihow', 'all', data_dir='./wikihow')\r\n```\r\nor\r\n```python\r\ndataset = load_dataset('wikihow', 'sep', data_dir='./wikihow')\r\n```\r\nI find that the instructions given by `huggingface` are not clear enough: I am going to fix this.\r\nPlease tell me if this eventually works for you.",
"That was it. Thank you Albert!"
] | 2021-04-19T21:02:31
| 2021-04-20T16:33:11
| 2021-04-20T16:33:11
|
CONTRIBUTOR
| null | null | null | null |
## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2).
## Steps to reproduce the bug
I have followed the instructions for creating a wikihow dataset. The [wikihow dataset site](https://huggingface.co/datasets/wikihow) says to use
```python
from datasets import load_dataset
dataset = load_dataset('wikihow')
```
to load the dataset. I do so and I get the message
```
AssertionError: The dataset wikihow with config all requires manual data.
Please follow the manual download instructions: You need to manually download two wikihow files. An overview of which files to download can be seen at https://github.com/mahnazkoupaee/WikiHow-Dataset.
You need to download the following two files manually:
1) https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 and save the file under <path/to/folder>/wikihowAll.csv
2) https://ucsb.app.box.com/s/7yq601ijl1lzvlfu4rjdbbxforzd2oag and save the file under <path/to/folder>/wikihowSep.csv
The <path/to/folder> can e.g. be "~/manual_wikihow_data".
Wikihow can then be loaded using the following command `datasets.load_dataset("wikihow", data_dir="<path/to/folder>")`.
.
Manual data can be loaded with `datasets.load_dataset(wikihow, data_dir='<path/to/manual/data>')
```
So I create a directory `./wikihow` and download `wikihowAll.csv` and `wikihowSep.csv` into the new directory.
Then I run
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
That's when I get the [stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2).
## Expected results
I expected it to load the downloaded files into a dataset.
## Actual results
```python
Using custom data configuration default-data_dir=.%2Fwikihow
Downloading and preparing dataset wikihow/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/azureuser/.cache/huggingface/datasets/wikihow/default-data_dir=.%2Fwikihow/0.0.0/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2...
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-5e4d40142f30> in <module>
----> 1 dataset = load_dataset('wikihow',data_dir='./wikihow')

~/.local/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
    745             try_from_hf_gcs=try_from_hf_gcs,
    746             base_path=base_path,
--> 747             use_auth_token=use_auth_token,
    748         )
    749

~/.local/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    577         if not downloaded_from_gcs:
    578             self._download_and_prepare(
--> 579                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    580             )
    581         # Sync info

~/.local/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    632         split_dict = SplitDict(dataset_name=self.name)
    633         split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 634         split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    635
    636         # Checksums verification

~/.cache/huggingface/modules/datasets_modules/datasets/wikihow/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2/wikihow.py in _split_generators(self, dl_manager)
    132
    133         path_to_manual_file = os.path.join(
--> 134             os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename
    135         )
    136

AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
```
- Datasets: 1.5.0
- Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
- Platform: Linux-5.4.0-1046-azure-x86_64-with-Ubuntu-18.04-bionic
```
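For reference, the call that eventually worked (per the replies in the comments) passes the config name (`all` or `sep`) together with `data_dir`; shown here as a sketch:
```python
from datasets import load_dataset

# The wikihow script requires both a config name and the manually downloaded CSVs.
dataset = load_dataset("wikihow", "all", data_dir="./wikihow")
```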
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/odellus",
"id": 4686956,
"login": "odellus",
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"repos_url": "https://api.github.com/users/odellus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/odellus",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2239/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2239/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 19:30:40
|
https://api.github.com/repos/huggingface/datasets/issues/2237
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2237/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2237/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2237/events
|
https://github.com/huggingface/datasets/issues/2237
| 861,427,439
|
MDU6SXNzdWU4NjE0Mjc0Mzk=
| 2,237
|
Update Dataset.dataset_size after transformed with map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
[
"@albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks!"
] | 2021-04-19T15:19:38
| 2021-04-20T14:22:05
| null |
MEMBER
| null | null | null | null |
After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated.
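A minimal sketch of the behaviour being described (the dataset and the added column are arbitrary choices for illustration):
```python
from datasets import load_dataset

ds = load_dataset("squad", split="train")
print(ds.dataset_size)  # size recorded in the dataset info at load time

ds2 = ds.map(lambda example: {"question_len": len(example["question"])})
print(ds2.dataset_size)  # still reports the original value; not recomputed after .map
```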
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2237/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2237/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2236
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2236/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2236/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2236/events
|
https://github.com/huggingface/datasets/issues/2236
| 861,388,145
|
MDU6SXNzdWU4NjEzODgxNDU=
| 2,236
|
Request to add StrategyQA dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
| null |
[] |
[] | 2021-04-19T14:46:26
| 2021-04-19T14:46:26
| null |
NONE
| null | null | null | null |
## Request to add StrategyQA dataset
- **Name:** StrategyQA
- **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa)
- **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf)
- **Data:** [here](https://allenai.org/data/strategyqa)
- **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. Good for multi-hop reasoning modeling.
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2236/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2236/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2230
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2230/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2230/events
|
https://github.com/huggingface/datasets/issues/2230
| 859,817,159
|
MDU6SXNzdWU4NTk4MTcxNTk=
| 2,230
|
Keys yielded while generating dataset are not being checked
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NikhilBartwal",
"id": 42388668,
"login": "NikhilBartwal",
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NikhilBartwal",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?",
"Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how the `ArrowWriter` functions, I think we can implement this as follows:\r\n\r\n1. First, we would have to update the `ArrowWriter.write()` function here:\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L296\r\nso that it accepts an additional argument `key` which would be appended along with the example here after hashing.\r\n\r\n2. Then, we would need to create a `Hasher` class which will take the key as its input and return a hash for it (We might need to use some hash salt which can be passed to the ArrowWriter.writer() with value equal to the `split_name` for differentiating between same keys of different splits)\r\n\r\n We can use the `hashlib.md5` function for hashing which will conert each key to its byte code before hashing (depending on the data type of the key) **Thus, the `key` type will be verified here**.\r\n\r\n3. Now, we would have to edit this\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L257\r\n so that it iterates over each `(hash, example)` pair (sorted according to hash). We can then simply **check whether each hash is different from the previous hash** (since they will be sorted)\r\n\r\nHowever, since I'm not very familiar with how the data is being written on disk in the form of a table, I might need some guidance for Step 3. \r\nPlease let me know your thought on this. Thanks!",
"Interesting !\r\nWe keep the dataset sorted in the order examples are generated by the builder (we expect the dataset builders to generate examples in deterministic order). Therefore I don't think we should shuffle the examples with the hashing. Let me know what you think.\r\nOther that that, I really like the idea of checking for keys duplicates in `write_examples_on_file` :)\r\n\r\nThis looks like a great plan ! Feel free to open a PR and ping me if you have questions or if I can help\r\n",
"@lhoestq I'm glad you liked the idea!\r\nI think that since the keys will be unique and deterministic in the nature themselves, so even if we shuffle the examples according to the hash, a deterministic order would still be maintained (as the keys will always have the same hash, whenever the dataset is generated). \r\nAnd since, we are not dealing with time series data (which would require the data to be in original order), I don't think the order of examples would matter much, as long as the order is deterministic and constant for all users.\r\n\r\nI think that this is also what was originally envisioned as mentioned in the documentation here:\r\nhttps://github.com/huggingface/datasets/blob/6775661b19d2ec339784f3d84553a3996a1d86c3/src/datasets/builder.py#L973\r\n\r\nAlso, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\nLet me know your thoughts in it! I would be opening a PR soon :)",
"When users load their own data, they expect the order to stay the same. I think that shuffling the data can make things inconvenient.\r\n\r\n> I think that this is also what was originally envisioned as mentioned in the documentation here:\r\n\r\nThis part was originally developed by tensorflow datasets, and tensorflow datasets indeed does the shuffling. However in this library this is probably not what we want in the general case. But if @albertvillanova and @thomwolf you have opinions on this please let us know.\r\n\r\n> Also, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\n\r\nMaybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch, but there might still be duplicates across batches. For 10 000 examples the hashes can just be stored as a python `set`.\r\n\r\nOtherwise if we want full deduplication, we need an extra tool that allows to temporarily save and query hashes that may need to use disk space rather than memory.",
"Yes I think we want to keep the original order by default and only shuffle when the user ask for it (for instance by calling `dataset.shuffle()`). That’s how I had it in mind originally.",
"Hey @lhoestq, I just had a more in-depth look at the original TFDS code about why the keys and hash were used in the first place.\r\n\r\nIn my opinion, the only use that the `hash(key)` serves is that it allows us to shuffle the examples in a deterministic order (as each example will always yield the same key and thus, the same hash on every system) so that the same dataset is generated for each user, irrespective of the order the examples are yielded by the dataset builder on different user systems.\r\n\r\nOtherwise, if we are not shuffling, then while yielding and writing the data, after getting the key and hashing it for an example, I can't quite see the use of the hash or the key. The hash will simply be generated for each example but not actually used anywhere?\r\n\r\n@lhoestq @thomwolf It would be great if you could explain a bit more about the usage of keys. Thanks!\r\n",
"In `datasets` the keys are currently ignored.\r\nFor shuffling we don't use the keys. Instead we shuffle an array of indices. Since both the original order of the dataset and the indices shuffling are deterministic, then `dataset.shuffle` is deterministic as well.\r\nWe can use it to:\r\n1. detect duplicates\r\n2. verify that the generation order is indeed deterministic\r\n3. maybe more ?",
"Thanks a lot @lhoestq. I think I understand what we need to do now. The keys can indeed be used for detecting duplicates in generated examples as well as ensuring the order.\r\n\r\n> Maybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch,\r\n\r\nI think that checking for duplicates in every batch independently would be sufficient as the probability of collisions using something like `MD5` is very low. I would be opening a draft PR soon. It would be great to have your guidance. Thanks!"
] | 2021-04-16T13:29:47
| 2021-05-10T17:31:21
| 2021-05-10T17:31:21
|
CONTRIBUTOR
| null | null | null | null |
The keys used in a dataset generation script to ensure the same order is generated on every user's end should be checked for their type (i.e. either `str` or `int`) as well as for uniqueness.
Currently, the keys are not checked for either of these, as is evident from the `xnli` dataset generation:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Even with a tuple as the key, the dataset is generated without any warning.
Also, as tested in the case of the `anli` dataset (I tweaked the dataset script to use `1` as the key for every example):
```
>>> import datasets
>>> nik = datasets.load_dataset('anli')
Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299...
0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''}
2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''}
1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''}
1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''}
1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''}
```
Here also, the dataset was generated successfully, without any warning, even though it had identical keys.
The reason appears to stem from here:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988
Here, although it has access to every key, the key is not checked and the example is written directly:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992
I would like to take this issue if you allow me. Thank You!
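For illustration, a rough sketch of the kind of check I have in mind (the function names and structure are hypothetical, not the actual `ArrowWriter` code):
```python
import hashlib

def hash_key(key, split_name=""):
    # Keys must be str or int; anything else (e.g. a tuple) is rejected here.
    if not isinstance(key, (str, int)):
        raise TypeError(f"Key {key!r} must be str or int, got {type(key).__name__}")
    # Salt with the split name so identical keys in different splits do not collide.
    return hashlib.md5(f"{split_name}:{key}".encode("utf-8")).hexdigest()

def check_batch_keys(keyed_examples, split_name="train"):
    # Detect duplicate keys within one write batch using a set of hashes.
    seen = set()
    for key, _example in keyed_examples:
        h = hash_key(key, split_name)
        if h in seen:
            raise ValueError(f"Duplicate key found: {key!r}")
        seen.add(h)

check_batch_keys([("0_0", {"text": "a"}), ("0_1", {"text": "b"})])  # passes
```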
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2230/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 24 days, 4:01:34
|
https://api.github.com/repos/huggingface/datasets/issues/2229
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2229/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2229/events
|
https://github.com/huggingface/datasets/issues/2229
| 859,810,602
|
MDU6SXNzdWU4NTk4MTA2MDI=
| 2,229
|
`xnli` dataset creating a tuple key while yielding instead of `str` or `int`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NikhilBartwal",
"id": 42388668,
"login": "NikhilBartwal",
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NikhilBartwal",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)",
"@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!"
] | 2021-04-16T13:21:53
| 2021-04-19T08:56:42
| 2021-04-19T08:56:42
|
CONTRIBUTOR
| null | null | null | null |
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code when yielding examples, which produces a tuple key instead of the expected `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Since community datasets in TensorFlow Datasets also use HF datasets, this causes a tuple-key error while loading HF's `xnli` dataset.
I'm up for sending a fix for this; I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
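As a sketch of the proposed fix inside a `_generate_examples`-style generator (the surrounding code is paraphrased, not the actual `xnli.py`):
```python
def _generate_examples(filepaths):
    # Yield a flat string key instead of the (file_idx, row_idx) tuple.
    for file_idx, filepath in enumerate(filepaths):
        with open(filepath, encoding="utf-8") as f:
            for row_idx, line in enumerate(f):
                key = f"{file_idx}_{row_idx}"
                yield key, {"text": line.strip()}  # real xnli fields omitted for brevity
```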
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2229/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 19:34:49
|
https://api.github.com/repos/huggingface/datasets/issues/2226
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2226/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2226/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2226/events
|
https://github.com/huggingface/datasets/issues/2226
| 859,720,302
|
MDU6SXNzdWU4NTk3MjAzMDI=
| 2,226
|
Batched map fails when removing all columns
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/villmow",
"id": 2743060,
"login": "villmow",
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"repos_url": "https://api.github.com/users/villmow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/villmow",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"I found the problem. I called `set_format` on some columns before. This makes it crash. Here is a complete example to reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nsst = load_dataset(\"sst\")\r\nsst.set_format(\"torch\", columns=[\"label\"], output_all_columns=True)\r\nds = sst[\"train\"]\r\n\r\n# crashes\r\nds.map(\r\n lambda x: {\"a\": list(range(20))},\r\n remove_columns=ds.column_names,\r\n load_from_cache_file=False,\r\n num_proc=1,\r\n batched=True,\r\n)\r\n```",
"Thanks for reporting and for providing this code to reproduce the issue, this is really helpful !",
"I merged a fix, it should work on `master` now :)\r\nWe'll do a new release soon !"
] | 2021-04-16T11:17:01
| 2022-10-05T17:32:15
| 2022-10-05T17:32:15
|
NONE
| null | null | null | null |
Hi @lhoestq,
I'm hijacking this issue because I'm currently trying the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
Here is my code (see the edit below, in which I added a simplified version).

This is the error:
```bash
pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000
```
I wonder why this error occurs when I delete every column. Can you give me a hint?
### Edit:
I preprocessed my dataset before (using `map` with the `features` argument) and saved it to disk. Could this be part of the error? I can iterate over the complete dataset and print every sample before calling `map`. There seems to be no other problem with the dataset.
I tried to simplify the code that crashes:
```python
# works
log.debug(dataset.column_names)
log.debug(dataset)
for i, sample in enumerate(dataset):
log.debug(i, sample)
# crashes
counted_dataset = dataset.map(
lambda x: {"a": list(range(20))},
input_columns=column,
remove_columns=dataset.column_names,
load_from_cache_file=False,
num_proc=num_workers,
batched=True,
)
```
```
pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000
```
### Edit 2:
Could this be a problem with a schema I set when preprocessing the dataset earlier? I tried to add the `features` argument to the function, and then I get a new error:
```python
# crashes
counted_dataset = dataset.map(
lambda x: {"a": list(range(20))},
input_columns=column,
remove_columns=dataset.column_names,
load_from_cache_file=False,
num_proc=num_workers,
batched=True,
features=datasets.Features(
{
"a": datasets.Sequence(datasets.Value("int32"))
}
)
)
```
```
File "env/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1704, in _map_single
writer.write_batch(batch)
File "env/lib/python3.8/site-packages/datasets/arrow_writer.py", line 312, in write_batch
col_type = schema.field(col).type if schema is not None else None
File "pyarrow/types.pxi", line 1341, in pyarrow.lib.Schema.field
KeyError: 'Column tokens does not exist in schema'
```
_Originally posted by @villmow in https://github.com/huggingface/datasets/issues/2193#issuecomment-820230874_
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2226/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2226/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 537 days, 6:15:14
|
https://api.github.com/repos/huggingface/datasets/issues/2224
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2224/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2224/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2224/events
|
https://github.com/huggingface/datasets/issues/2224
| 857,983,361
|
MDU6SXNzdWU4NTc5ODMzNjE=
| 2,224
|
Raise error if Windows max path length is not disabled
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2021-04-14T14:57:20
| 2021-04-14T14:59:13
| null |
MEMBER
| null | null | null | null |
On startup, raise an error if Windows max path length is not disabled; ask the user to disable it.
Linked to discussion in #2220.
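One possible way to implement the check, sketched below; this is not the actual `datasets` implementation, and reading `LongPathsEnabled` from the registry is just one option for detecting the setting:
```python
import platform

def check_windows_long_paths():
    # Raise on Windows if the legacy 260-character MAX_PATH limit is still in force.
    if platform.system() != "Windows":
        return
    import winreg  # only available on Windows
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\FileSystem",
    )
    value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
    if value != 1:
        raise RuntimeError(
            "Windows max path length (260 characters) is still enabled. "
            "Please enable long paths (LongPathsEnabled=1) and restart."
        )
```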
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2224/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2224/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2218
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2218/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2218/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2218/events
|
https://github.com/huggingface/datasets/issues/2218
| 857,238,435
|
MDU6SXNzdWU4NTcyMzg0MzU=
| 2,218
|
Duplicates in the LAMA dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7276193?v=4",
"events_url": "https://api.github.com/users/amarasovic/events{/privacy}",
"followers_url": "https://api.github.com/users/amarasovic/followers",
"following_url": "https://api.github.com/users/amarasovic/following{/other_user}",
"gists_url": "https://api.github.com/users/amarasovic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amarasovic",
"id": 7276193,
"login": "amarasovic",
"node_id": "MDQ6VXNlcjcyNzYxOTM=",
"organizations_url": "https://api.github.com/users/amarasovic/orgs",
"received_events_url": "https://api.github.com/users/amarasovic/received_events",
"repos_url": "https://api.github.com/users/amarasovic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amarasovic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amarasovic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amarasovic",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', split='train')\r\n>>> dataset = Dataset.from_pandas(dataset.to_pandas().drop_duplicates(subset=...)) # specify a subset of the columns to consider in a list or use all of the columns if None\r\n```\r\n\r\nNote that the same can be achieved with the `Dataset.filter` method but this would requrie some extra work (filter function, speed?).",
"Oh, seems like my question wasn't specified well. I'm _not_ asking how to remove duplicates, but whether duplicates should be removed if I want to do the evaluation on the LAMA dataset as it was proposed in the original paper/repository? In other words, will I get the same result if evaluate on the de-duplicated dataset loaded from HF's `datasets` as the results I'd get if I use the original data format and data processing script in https://github.com/facebookresearch/LAMA? ",
"So it looks like the person who added LAMA to the library chose to have one item per piece of evidence rather than one per relation - and in this case, there are duplicate pieces of evidence for the target relation\r\n\r\nIf I understand correctly, to reproduce reported results, you would have to aggregate predictions for the several pieces of evidence provided for each relation (each unique `uuid`), but the original authors will know better \r\n\r\ncc @fabiopetroni "
] | 2021-04-13T18:59:49
| 2021-04-14T21:42:27
| null |
NONE
| null | null | null | null |
I observed duplicates in the LAMA probing dataset; see the minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc)
>>> train_dataset = dataset['train']
>>> train_dataset[0]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
>>> train_dataset[1]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
```
I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. This particular duplicate comes from:
```
{"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]}
```
What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA?
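For reference, here is a minimal sketch of one way to drop the duplicates on the `datasets` side with `Dataset.filter`, keeping only the first occurrence of each `(uuid, masked_sentence)` pair. It assumes single-process filtering and is only an illustration, not an officially recommended approach:
```python
from datasets import load_dataset

dataset = load_dataset("lama", "trex", split="train")

seen = set()

def keep_first_occurrence(example):
    # One row per piece of evidence -> duplicates share uuid and masked_sentence.
    key = (example["uuid"], example["masked_sentence"])
    if key in seen:
        return False
    seen.add(key)
    return True

deduplicated = dataset.filter(keep_first_occurrence)
print(len(dataset), len(deduplicated))
```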
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2218/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2218/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2214
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2214/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2214/events
|
https://github.com/huggingface/datasets/issues/2214
| 856,333,657
|
MDU6SXNzdWU4NTYzMzM2NTc=
| 2,214
|
load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4",
"events_url": "https://api.github.com/users/nsaphra/events{/privacy}",
"followers_url": "https://api.github.com/users/nsaphra/followers",
"following_url": "https://api.github.com/users/nsaphra/following{/other_user}",
"gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsaphra",
"id": 414788,
"login": "nsaphra",
"node_id": "MDQ6VXNlcjQxNDc4OA==",
"organizations_url": "https://api.github.com/users/nsaphra/orgs",
"received_events_url": "https://api.github.com/users/nsaphra/received_events",
"repos_url": "https://api.github.com/users/nsaphra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsaphra",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```",
"There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are downloaded from `master` instead of the `1.2.1` repo.\r\n\r\nYou can try setting the env var `HF_SCRIPTS_VERSION=\"1.2.1\"` as a workaround. Let me know if that helps.",
"I just faced the same issue. I was using 1.2.1 from conda and received the same AttributeError complaining about 'add_start_docstrings'. Uninstalling the conda installed datasets and then installing the latest datasets (version 1.5.0) using pip install solved the issue for me. I don't like mixing up conda and pip installs in the same environments but this will have to do for now, until 1.5.0 is made available through conda.",
"Yep, seems to have fixed things! The conda package could really do with an update. Thanks!"
] | 2021-04-12T20:26:01
| 2021-04-23T15:20:02
| 2021-04-23T15:20:02
|
NONE
| null | null | null | null |
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class
File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module>
@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4",
"events_url": "https://api.github.com/users/nsaphra/events{/privacy}",
"followers_url": "https://api.github.com/users/nsaphra/followers",
"following_url": "https://api.github.com/users/nsaphra/following{/other_user}",
"gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsaphra",
"id": 414788,
"login": "nsaphra",
"node_id": "MDQ6VXNlcjQxNDc4OA==",
"organizations_url": "https://api.github.com/users/nsaphra/orgs",
"received_events_url": "https://api.github.com/users/nsaphra/received_events",
"repos_url": "https://api.github.com/users/nsaphra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsaphra",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2214/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10 days, 18:54:01
|
https://api.github.com/repos/huggingface/datasets/issues/2212
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2212/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2212/events
|
https://github.com/huggingface/datasets/issues/2212
| 855,999,133
|
MDU6SXNzdWU4NTU5OTkxMzM=
| 2,212
|
Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hanss0n",
"id": 21348833,
"login": "hanss0n",
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"organizations_url": "https://api.github.com/users/hanss0n/orgs",
"received_events_url": "https://api.github.com/users/hanss0n/received_events",
"repos_url": "https://api.github.com/users/hanss0n/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hanss0n",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available",
"I saw this on their website when we request to download the dataset:\r\n\r\n\r\nCan we still request them link for the dataset and make a PR? @lhoestq @yjernite ",
"I've contacted Martin (first author of the fquad paper) regarding a possible new url. Hopefully we can get one soon !",
"They now made a website to force people who want to use the dataset for commercial purposes to seek a commercial license from them ...",
"The script has been adopted to support manual download from the website, so I'm closing this issue."
] | 2021-04-12T13:49:56
| 2023-10-03T16:09:19
| 2023-10-03T16:09:18
|
NONE
| null | null | null | null |
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-48-a2721797e23b> in <module>()
----> 1 fquad = load_dataset("fquad")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 616 raise ConnectionError("Couldn't reach {}".format(url))
617
618 # Try a second time
ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip
```
Does anyone know why that is and how to fix it?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2212/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 904 days, 2:19:22
|
https://api.github.com/repos/huggingface/datasets/issues/2211
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2211/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2211/events
|
https://github.com/huggingface/datasets/issues/2211
| 855,988,410
|
MDU6SXNzdWU4NTU5ODg0MTA=
| 2,211
|
Getting checksum error when trying to load lc_quad dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hanss0n",
"id": 21348833,
"login": "hanss0n",
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"organizations_url": "https://api.github.com/users/hanss0n/orgs",
"received_events_url": "https://api.github.com/users/hanss0n/received_events",
"repos_url": "https://api.github.com/users/hanss0n/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hanss0n",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n",
"Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you! "
] | 2021-04-12T13:38:58
| 2021-04-14T13:42:25
| 2021-04-14T13:42:25
|
NONE
| null | null | null | null |
I'm having issues loading the [lc_quad](https://huggingface.co/datasets/lc_quad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-42-404ace83f73c> in <module>()
----> 1 lc_quad = load_dataset("lc_quad")
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip']
```
Does anyone know why this could be and how to fix it?
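While waiting for updated checksum metadata, a possible workaround is to skip the verification step; this is just a sketch and it disables integrity checks, so only use it if you trust the source:
```python
from datasets import load_dataset

# Skips checksum/size verification of the downloaded archive.
lc_quad = load_dataset("lc_quad", ignore_verifications=True)
```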
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2211/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 0:03:27
|
https://api.github.com/repos/huggingface/datasets/issues/2210
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2210/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2210/events
|
https://github.com/huggingface/datasets/issues/2210
| 855,709,400
|
MDU6SXNzdWU4NTU3MDk0MDA=
| 2,210
|
dataloading slow when using HUGE dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.",
"Hi, thank you for your answer. I did not realize that my issue stems from the same problem. "
] | 2021-04-12T08:33:02
| 2021-04-13T02:03:05
| 2021-04-13T02:03:05
|
NONE
| null | null | null | null |
Hi,
When I use datasets with 600GB of data, the dataloading time increases significantly.
I am experimenting with two datasets: one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training.
When looking at the pytorch-lightning profiler output of the two runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause?
* 60GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 200.33 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 71.994 |1 | 71.994 | 35.937 |
run_training_batch | 0.64373 |100 | 64.373 | 32.133 |
optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 |
training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 |
model_backward | 0.37552 |100 | 37.552 | 18.745 |
model_forward | 0.22813 |100 | 22.813 | 11.387 |
training_step | 0.22759 |100 | 22.759 | 11.361 |
get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 |
```
* 600GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 3285.6 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 |
run_training_batch | 7.2596 |100 | 725.96 | 22.095 |
optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 |
training_step_and_backward | 7.223 |100 | 722.3 | 21.984 |
model_backward | 6.9662 |100 | 696.62 | 21.202 |
get_train_batch | 6.322 |100 | 632.2 | 19.241 |
model_forward | 0.24902 |100 | 24.902 | 0.75789 |
training_step | 0.2485 |100 | 24.85 | 0.75633 |
```
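For context, here is a minimal sketch of the kind of setup described above; the path and column names are placeholders, and the actual training script runs pytorch-lightning on top of this loader:
```python
from datasets import load_from_disk
from torch.utils.data import DataLoader

dataset = load_from_disk("/path/to/prepared_dataset")  # placeholder path to a saved Dataset

# Only return these columns, as torch tensors, when indexing the dataset.
dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

train_loader = DataLoader(dataset, batch_size=8, num_workers=4)
```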
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2210/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17:30:03
|
https://api.github.com/repos/huggingface/datasets/issues/2207
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2207/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2207/events
|
https://github.com/huggingface/datasets/issues/2207
| 855,267,383
|
MDU6SXNzdWU4NTUyNjczODM=
| 2,207
|
making labels consistent across the datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features['label'].int2str(i)`.\r\n",
"Hi! You can also easily reorder the label with the [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/en/process#align) method."
] | 2021-04-11T10:03:56
| 2022-06-01T16:23:08
| 2022-06-01T16:21:10
|
NONE
| null | null | null | null |
Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The label names, however, are sometimes not consistent with the actual stored labels. For instance, in the case of XNLI the stored labels are 0, 1, 2, but if one tries to access them as above they appear as entailment, neutral, contradiction.
It would be great to have the labels consistent.
thanks
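For what it's worth, a small sketch of how the integer indices and the label names relate for a `ClassLabel` column (XNLI is used here only as an example):
```python
from datasets import load_dataset

dataset = load_dataset("xnli", "en", split="validation")
label_feature = dataset.features["label"]

print(dataset[0]["label"])                         # stored as an integer index, e.g. 0
print(label_feature.int2str(dataset[0]["label"]))  # corresponding name, e.g. 'entailment'
print(label_feature.str2int("contradiction"))      # back to the integer index, e.g. 2
```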
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2207/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 416 days, 6:17:14
|
https://api.github.com/repos/huggingface/datasets/issues/2206
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2206/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2206/events
|
https://github.com/huggingface/datasets/issues/2206
| 855,252,415
|
MDU6SXNzdWU4NTUyNTI0MTU=
| 2,206
|
Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yana-xuyan",
"id": 38536635,
"login": "yana-xuyan",
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yana-xuyan",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assumed range.\r\nCan you please provide a minimal reproducible example for more help?",
"Hi @yana-xuyan, thanks for reporting.\r\n\r\nAs clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. As your message error states, one of the values of this field is `50259`, and therefore it cannot be stored as an `int8`.\r\n\r\nMaybe we could implement a way to disable this optimization and allow using any integer value; although the size of the cache files would be much larger.",
"I'm facing same issue @mariosasko @albertvillanova \r\n\r\n```\r\nArrowInvalid: Integer value 50260 not in range: -128 to 127\r\n```\r\n\r\nTo reproduce:\r\n```python\r\nSPECIAL_TOKENS = ['<bos>','<eos>','<speaker1>','<speaker2>','<pad>']\r\nATTR_TO_SPECIAL_TOKEN = {\r\n 'bos_token': '<bos>', \r\n 'eos_token': '<eos>', \r\n 'pad_token': '<pad>',\r\n 'additional_special_tokens': ['<speaker1>', '<speaker2>']\r\n }\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\", use_fast=False)\r\nnum_added_tokens =tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN)\r\nvocab_size = len(self.tokenizer.encoder) + num_added_tokens\r\nvocab =tokenizer.get_vocab()\r\n\r\npad_index = tokenizer.pad_token_id\r\neos_index = tokenizer.eos_token_id\r\nbos_index = tokenizer.bos_token_id\r\nspeaker1_index = vocab[\"<speaker1>\"]\r\nspeaker2_index = vocab[\"<speaker2>\"]\r\n```\r\n\r\n```python\r\ntokenizer.decode(['50260'])\r\n'<speaker1>'\r\n```",
"@mariosasko \r\nI am hitting this bug in the Bert tokenizer too. I see that @albertvillanova labeled this as a bug back in April. Has there been a fix released yet?\r\nWhat I did for now is to just disable the optimization in the HF library. @yana-xuyan and @thomas-happify, is that what you did and did that work for you?\r\n\r\n",
"Hi @gregg-ADP, \r\n\r\nThis is still a bug.\r\n\r\nAs @albertvillanova has suggested, maybe it's indeed worth adding a variable to `config.py` to have a way to disable this behavior.\r\n\r\nIn the meantime, this forced optimization can be disabled by specifying `features` (of the returned examples) in the `map` call:\r\n```python\r\nfrom datasets import *\r\n... # dataset init\r\nds.map(process_example, features=Features({\"special_tokens_mask\": Sequence(Value(\"int32\")), ... rest of the features}) \r\n```\r\n\r\ncc @lhoestq so he is also aware of this issue",
"Thanks for the quick reply @mariosasko. What I did was to changed the optimizer to use int32 instead of int8. \r\nWhat you're suggesting specifies the type for each feature explicitly without changing the HF code. This is definitely a better option. However, we are hitting a new error later:\r\n```\r\n File \"/Users/ccccc/PycharmProjects/aaaa-ml/venv-source/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'pos'\r\n\r\n```\r\nWhere 'pos' is the name of a new feature we added. Do you agree that your way of fixing the optimizer issue will not fix our new issue? If not, I will continue with this optimizer fix until we resolve our other issue.\r\n",
"Hi @gwc4github,\r\n\r\nthe fix was merged a few minutes ago, and it doesn't require any changes on the user side (e.g. no need for specifying `features`). If you find time, feel free to install `datasets` from master with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand let us know if it works for your use case! "
] | 2021-04-11T08:40:09
| 2021-11-10T12:18:30
| 2021-11-10T12:04:28
|
NONE
| null | null | null | null |
I added five more special tokens to the GPT2 tokenizer, but after that, when I try to pre-process the data using my previous code, I get the error shown below:
```
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single
writer.write(example)
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write
self.write_on_file()
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file
pa_array = pa.array(typed_sequence)
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__
out = out.cast(pa.list_(self.optimized_int_type))
File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127
```
Do you have any idea about it?
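In case it is useful, here is a rough sketch of the workaround suggested in the comments, i.e. passing explicit `features` to `map` so the affected columns are stored as `int32` instead of being downcast to `int8`. The tokenizer setup, the `text` column, and the file name are only assumptions for illustration:
```python
from datasets import Features, Sequence, Value, load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["<speaker1>", "<speaker2>"]})

dataset = load_dataset("text", data_files={"train": "train.txt"}, split="train")  # placeholder file

# Declare the output schema explicitly so the writer does not apply its int8 optimization.
features = Features(
    {
        "input_ids": Sequence(Value("int32")),
        "attention_mask": Sequence(Value("int8")),
        "special_tokens_mask": Sequence(Value("int32")),
    }
)

tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], return_special_tokens_mask=True),
    batched=True,
    remove_columns=["text"],
    features=features,
)
```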
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2206/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 213 days, 3:24:19
|
https://api.github.com/repos/huggingface/datasets/issues/2200
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2200/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2200/events
|
https://github.com/huggingface/datasets/issues/2200
| 854,449,656
|
MDU6SXNzdWU4NTQ0NDk2NTY=
| 2,200
|
_prepare_split will overwrite DatasetBuilder.info.features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4",
"events_url": "https://api.github.com/users/Gforky/events{/privacy}",
"followers_url": "https://api.github.com/users/Gforky/followers",
"following_url": "https://api.github.com/users/Gforky/following{/other_user}",
"gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gforky",
"id": 4157614,
"login": "Gforky",
"node_id": "MDQ6VXNlcjQxNTc2MTQ=",
"organizations_url": "https://api.github.com/users/Gforky/orgs",
"received_events_url": "https://api.github.com/users/Gforky/received_events",
"repos_url": "https://api.github.com/users/Gforky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gforky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gforky",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201",
"> Hi ! This might be related to #2153\r\n> \r\n> You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\n> I'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n> \r\n> EDIT: opened #2201\r\n\r\nGlad to hear that! Thank you for your fix, I'm new to huggingface, it's a fantastic project 😁"
] | 2021-04-09T11:47:13
| 2021-06-04T10:37:35
| 2021-06-04T10:37:35
|
NONE
| null | null | null | null |
Hi, here is my issue:
I initialized a Csv DatasetBuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if data_args.num_features:
features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")})
if data_args.label_classes:
features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(","))
else:
features["label"] = hf_features.Value("float32")
return hf_features.Features(features)
datasets = load_dataset(extension,
data_files=data_files,
sep=data_args.delimiter,
header=data_args.header,
column_names=data_args.column_names.split(",") if data_args.column_names else None,
features=get_dataset_features(data_args=data_args))
```
The `features` are printed out as below before `builder_instance.as_dataset` is called:
```
{'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
But after `builder_instance.as_dataset` is called for the Csv dataset builder, the `features` are changed to:
```
{'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's `info.features` are overwritten by the `ArrowWriter`'s `_features`.
But the `ArrowWriter` is initialized without passing `features`.
So my concern is:
Must this overwrite happen, or should there be an option to pass `features` to the `_prepare_split` function?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2200/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 55 days, 22:50:22
|
https://api.github.com/repos/huggingface/datasets/issues/2196
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2196/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2196/events
|
https://github.com/huggingface/datasets/issues/2196
| 854,126,114
|
MDU6SXNzdWU4NTQxMjYxMTQ=
| 2,196
|
`load_dataset` caches two arrow files?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen",
"user_view_type": "public"
}
|
[
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
closed
| false
| null |
[] |
[
"Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid having to load the dataset in RAM, even after many transforms",
"Thanks @lhoestq! Hmm.. that's strange because I specifically turned off auto caching, and saved mapped result, using `save_to_disk`, to another location. At this location, the following file is created:`355G\tcache-ed205e500a7dc44c.arrow`\r\n\r\nTo my observation, both `load_dataset` and `map` creates `cache-*` files, and I wonder what the `cache-*` file from `load_dataset` is for (as I believe the same information is stored in `json-train.arrow`.",
"This is a wrong report -- `cache-*` files are created only my `map`, not by `load_dataset`. "
] | 2021-04-09T03:49:19
| 2021-04-12T05:25:29
| 2021-04-12T05:25:29
|
NONE
| null | null | null | null |
Hi,
I am using datasets to load a large json file of 587G.
I checked the cache folder and found that two arrow files were created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`?
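For reference, a quick way to check which arrow file(s) a loaded dataset is actually backed by (a sketch with a placeholder file name):
```python
from datasets import load_dataset

dataset = load_dataset("json", data_files="data.json", split="train")  # placeholder file name

# Arrow files this dataset is directly backed by; other cache-* files in the
# folder are cached results of map/filter/etc. calls.
print(dataset.cache_files)
```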
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2196/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 1:36:10
|
https://api.github.com/repos/huggingface/datasets/issues/2195
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2195/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2195/events
|
https://github.com/huggingface/datasets/issues/2195
| 854,070,194
|
MDU6SXNzdWU4NTQwNzAxOTQ=
| 2,195
|
KeyError: '_indices_files' in `arrow_dataset.py`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...",
"Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues"
] | 2021-04-09T01:37:12
| 2021-04-09T09:55:09
| 2021-04-09T09:54:39
|
NONE
| null | null | null | null |
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk
if state["_indices_files"]:
KeyError: '_indices_files'
```
I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions:
https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634
May I suggest using `state.get()` instead of directly indexing the dictionary?
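To illustrate the suggestion with a tiny self-contained sketch (the dict below just mimics a `state` saved by an older version; it is not the actual library code):
```python
# A state dict saved by an older version of the library may lack "_indices_files".
state = {"_data_files": [{"filename": "dataset.arrow"}]}

# state["_indices_files"]          # raises KeyError on such older saves
if state.get("_indices_files"):    # returns None -> falsy -> safely skipped
    print("indices files recorded")
else:
    print("no indices files recorded")
```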
@lhoestq
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2195/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 8:17:27
|
https://api.github.com/repos/huggingface/datasets/issues/2194
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2194/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2194/events
|
https://github.com/huggingface/datasets/issues/2194
| 853,909,452
|
MDU6SXNzdWU4NTM5MDk0NTI=
| 2,194
|
py3.7: TypeError: can't pickle _LazyModule objects
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n"
] | 2021-04-08T21:02:48
| 2021-04-09T16:56:50
| 2021-04-09T01:52:57
|
CONTRIBUTOR
| null | null | null | null |
While this works fine with py3.8, it fails under py3.7 with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \
--per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \
--fp16
```
```
Traceback (most recent call last):
File "examples/language-modeling/run_clm.py", line 453, in <module>
main()
File "examples/language-modeling/run_clm.py", line 336, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map
update_data=update_data,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle _LazyModule objects
```
```
$ python --version
Python 3.7.4
$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.8.0.dev20210110+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
```
Thanks.
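
For reference, the failure happens while `datasets` fingerprints the `map` arguments by serializing them (the `fingerprint.py` / `py_utils.py` / `dill` frames in the traceback above). A minimal, hypothetical sanity check along those lines — `check_picklable` is just an illustrative helper, not part of `datasets` or `transformers` — would be:
```python
import dill

def check_picklable(obj):
    """Try to serialize `obj` roughly the way datasets' fingerprinting does."""
    try:
        dill.dumps(obj)
        print(f"{obj!r} is picklable")
    except Exception as err:  # e.g. TypeError: can't pickle _LazyModule objects
        print(f"{obj!r} cannot be pickled: {err}")

# check_picklable(tokenize_function)  # the function passed to datasets.map in run_clm.py
```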
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2194/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4:50:09
|
https://api.github.com/repos/huggingface/datasets/issues/2193
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2193/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2193/events
|
https://github.com/huggingface/datasets/issues/2193
| 853,725,707
|
MDU6SXNzdWU4NTM3MjU3MDc=
| 2,193
|
Filtering/mapping on one column is very slow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/norabelrose",
"id": 39116809,
"login": "norabelrose",
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"type": "User",
"url": "https://api.github.com/users/norabelrose",
"user_view_type": "public"
}
|
[
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
closed
| false
| null |
[] |
[
"Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoiding many arrow<->python conversions especially during writing.\r\n\r\nI'll let you know how it goes !",
"@lhoestq Thanks for the response— it's great to hear that we'll be getting a much faster `filter` method soon. However, my use case does also involve using `map` over a single column in order to pre-compute roughly uniformly sized batches, and right now that is also very slow. Is there any plan to make `map` faster for single column operations?\r\n\r\nIf that's not a priority for the maintainers right now, I could try my hand at adding the feature, but I can't guarantee I would do a good job given my lack of familiarity with pyarrow.",
"Currently the optimal setup for single-column computations is probably to do something like\r\n```python\r\nresult = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n```\r\nThis has two advantages:\r\n- input_columns=\"my_col\" allows to only read the column \"my_col\"\r\n- remove_columns=dataset.column_names makes `map` only keep the output of your function `f`, and it drops the other columns of the dataset instead of keeping them.\r\n\r\nLet me know if it improves speed on your side.\r\n\r\nYou can also get more speed by using `batched=True` and setting `num_proc=` for multiprocessing",
"Hi @lhoestq ,\r\n\r\nI'm hijacking this issue, because I'm currently trying to do the approach you recommend:\r\n\r\n> Currently the optimal setup for single-column computations is probably to do something like\r\n> \r\n> ```python\r\n> result = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n> ```\r\n\r\nHere is my code: (see edit, in which I added a simplified version\r\n\r\n```\r\nThis is the error:\r\n```bash\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000\r\n```\r\nI wonder why this error occurs, when I delete every column? Can you give me a hint?\r\n\r\n### Edit:\r\nI preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the\r\ncomplete dataset and print every sample before calling map. There seems to be no other problem with the dataset.\r\n\r\nI tried to simplify the code that crashes:\r\n\r\n```python\r\n# works\r\nlog.debug(dataset.column_names)\r\nlog.debug(dataset)\r\nfor i, sample in enumerate(dataset):\r\n log.debug(i, sample)\r\n\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n)\r\n```\r\n\r\n```\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000\r\n```\r\n\r\nEdit2: \r\n\r\nMay this be a problem with a schema I set when preprocessing the dataset before? I tried to add the `features` argument to the function and then I get a new error:\r\n\r\n```python\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n features=datasets.Features(\r\n {\r\n \"a\": datasets.Sequence(datasets.Value(\"int32\"))\r\n }\r\n )\r\n)\r\n```\r\n\r\n```\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1704, in _map_single\r\n writer.write_batch(batch)\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 312, in write_batch\r\n col_type = schema.field(col).type if schema is not None else None\r\n File \"pyarrow/types.pxi\", line 1341, in pyarrow.lib.Schema.field\r\nKeyError: 'Column tokens does not exist in schema'\r\n```",
"Hi ! Can you open a separate issue for that ?\r\nAlso if you could provide a google colab or a sample code to reproduce this issue that would be helpful.\r\nOn my side I was not able to reproduce this error.",
"@lhoestq Sorry I'm just responding now. I'm currently using your recommendation for the map on a single column, and I've gotten it to be fast enough to sort of work for my use case by just setting `num_proc=10`, although it's still quite slow. It's clear that it is still loading the entirety of each row into memory and then discarding everything except the selected column, instead of exploiting the columnar data format to only load the selected column.\r\n\r\nMy code is like this:\r\n```\r\n self.dataset = self.dataset.sort('num_tokens')\r\n batch_dataset = self.dataset.map(\r\n\tcompute_uniform_sized_batches,\r\n\tbatched=True, batch_size=10_000, num_proc=10, input_columns=['num_tokens'],\r\n\tremove_columns=get_columns_all_equal(self.dataset),\r\n\twith_indices=True,\r\n\tfn_kwargs=dict(max_size=tokens_per_batch)\r\n)\r\nself.batches = {\r\n\tname: list(zip(split['start'], split['length']))\r\n\tfor name, split in batch_dataset.items()\r\n}\r\n```\r\nI find that the processes with higher IDs take significantly longer to complete, presumably because the dataset is sorted by article length and they're loading the entire article text into memory, instead of just the 'num_tokens' column.\r\n\r\nI should note that my batching procedure would work best if I just used `batch_size=None` and loaded the whole column into memory at once, but I found that this was intolerably slow and gave me no progress information, so I'm using the less than ideal `batch_size=10_000`.",
"Hi @norabelrose ! I'm glad you managed to make this work on your side.\r\nRegarding memory usage, you can try to drop the columns that you don't want to use for your `map` for now.\r\n\r\nIn the future we'll try to find a way to not load unnecessary columns in memory in `map`. Currently the way it works is that it gets the batch as a python dict, then it updates it using the output of your mapping function, and finally it removes columns from `remove_columns`. Therefore for a moment some columns are loaded in memory even if you remove them or don't use them for your mapping function.\r\n\r\nIt would be nice to have a way to optimize memory for cases such as yours !",
"@lhoestq After looking through the source code, it looks like the following solution has at least some chance of working:\r\n- refactor `Dataset.map()` so that the `input_columns` parameter is implemented by using the `self.formatted_as()` context manager with `columns=input_columns`\r\n- change `Dataset._getitem()` so that it passes `self._data.drop(drop_columns)` to the `query_table()` function whenever `format_columns` is non-None and `output_all_columns` is False, instead of `self._data` itself",
"Looks like a great direction :)\r\nNote that `query_table` doesn't bring data into memory. Only `format_table` does.\r\nAlso the dataset may already have a format with `columns=` already defined so we would need to define the formatted `input_dataset` like:\r\n```python\r\n# before the `map` main for loop\r\ninput_columns = input_columns if input_columns is not None else self.column_names\r\nif not self._output_all_columns:\r\n columns = [col for col in input_columns if self._format_columns is None or col in self._format_columns]\r\n input_dataset = self.with_format(\r\n type=self._format_type,\r\n columns=columns\r\n )\r\nelse:\r\n # in this case we could find a way to filter both format_columns and unformatted columns eventually\r\n input_dataset = self\r\n# then input_dataset can be used in the main for loop of `map`\r\n```\r\n\r\nEDIT: oh and regarding streaming format versus file format for arrow, we plan to start using the file format #1933 at one point (though I'm not sure if it would improve performance)",
"Good to know about `query_table` not bringing anything into memory. I was under the impression that it did because a while back I looked at my `map` operation in pdb and it looked like it was spending forever in line 93 of formatting.py, `return pa.concat_tables(....)`, although that was before the `fast_slice` interpolation search was implemented, so it may have had more to do with the slow ChunkedArray slice implementation than anything else.\r\n\r\nIf `query_table` is I/O free then the fix may be as simple as just adding this to line 1779 of arrow_dataset.py:\r\n```python\r\n# Only load the columns we actually need\r\nif input_columns:\r\n stack.enter_context(self.formatted_as(\r\n self._format_type,\r\n columns=input_columns,\r\n output_all_columns=False,\r\n **self._format_kwargs\r\n ))\r\n```\r\nIt's not clear to me why the `[col for col in input_columns if self._format_columns is None or col in self._format_columns]` check would be necessary— it seems like either `input_columns` should simply temporarily override the `_format_columns` within the `map` operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within `map`, but maybe I'm just missing it.",
"`query_table` simply slices/concatenates parts of the table. The actual data inside the table is not brought in memory.\r\nAlso I'm more in favor of declaring `input_dataset = self.with_format(...)` since `formatted_as` may update the dataset fingerprint of `self`, which is not expected when someone runs `map`.\r\n\r\n> It's not clear to me why the [col for col in input_columns if self._format_columns is None or col in self._format_columns] check would be necessary— it seems like either input_columns should simply temporarily override the _format_columns within the map operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within map, but maybe I'm just missing it.\r\n\r\nActually yes we can just use input_columns. And we do need to add a check to make sure there are not conflicts or this could lead to confusing errors.",
"That sounds good to me! I just submitted a PR (#2246) implementing your approach. I also changed how `_query_table` handles Iterable keys since it still seemed like `pa.concat_tables` was taking a long time to create the table for each batch. Now my whole `map()` operation takes 1 min 46 seconds where it used to take somewhere on the order of 10 minutes."
] | 2021-04-08T18:16:14
| 2021-04-26T16:13:59
| 2021-04-26T16:13:59
|
CONTRIBUTOR
| null | null | null | null |
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API.
I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.
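To make the intent concrete, here is a rough sketch of the usage I have in mind (assuming `num_tokens` has already been added by the tokenizing `map`; the thresholds are placeholders):
```python
# Ideally only the 'num_tokens' column should ever be materialized here,
# since it is the only input column the predicate needs.
filtered = dataset.filter(
    lambda n: 128 <= n <= 1024,
    input_columns=["num_tokens"],
)
```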
PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2193/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17 days, 21:57:45
|
https://api.github.com/repos/huggingface/datasets/issues/2190
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2190/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2190/events
|
https://github.com/huggingface/datasets/issues/2190
| 853,181,564
|
MDU6SXNzdWU4NTMxODE1NjQ=
| 2,190
|
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"events_url": "https://api.github.com/users/anassalamah/events{/privacy}",
"followers_url": "https://api.github.com/users/anassalamah/followers",
"following_url": "https://api.github.com/users/anassalamah/following{/other_user}",
"gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anassalamah",
"id": 8571003,
"login": "anassalamah",
"node_id": "MDQ6VXNlcjg1NzEwMDM=",
"organizations_url": "https://api.github.com/users/anassalamah/orgs",
"received_events_url": "https://api.github.com/users/anassalamah/received_events",
"repos_url": "https://api.github.com/users/anassalamah/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anassalamah",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```",
"Hello @albertvillanova, \r\n\r\nThanks for the suggestion. I didn't know you could do that. however, it didn't resolve the issue\r\n\r\n\r\n"
] | 2021-04-08T07:53:43
| 2021-05-24T10:03:55
| 2021-05-24T10:03:55
|
NONE
| null | null | null | null |
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
from itertools import chain
from datasets import load_dataset

train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327), range(1384,1399), range(1030,1042)), with_indices=True)
```
* I'm fairly new to using datasets so I might be doing something wrong
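
Based on the suggestion in the comments, a sketch of a loading call that selects the language pair explicitly (assuming the `news_commentary` loader accepts `lang1`/`lang2` keyword arguments as described there) would look like:
```python
from datasets import load_dataset

train_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split="train[:98%]")
val_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split="train[98%:]")

# spot-check a few rows to confirm both sides really are Arabic/English
print(val_ds[0])
```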
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"events_url": "https://api.github.com/users/anassalamah/events{/privacy}",
"followers_url": "https://api.github.com/users/anassalamah/followers",
"following_url": "https://api.github.com/users/anassalamah/following{/other_user}",
"gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anassalamah",
"id": 8571003,
"login": "anassalamah",
"node_id": "MDQ6VXNlcjg1NzEwMDM=",
"organizations_url": "https://api.github.com/users/anassalamah/orgs",
"received_events_url": "https://api.github.com/users/anassalamah/received_events",
"repos_url": "https://api.github.com/users/anassalamah/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anassalamah",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2190/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 46 days, 2:10:12
|
https://api.github.com/repos/huggingface/datasets/issues/2189
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2189/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2189/events
|
https://github.com/huggingface/datasets/issues/2189
| 853,052,891
|
MDU6SXNzdWU4NTMwNTI4OTE=
| 2,189
|
save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon"
] | 2021-04-08T04:42:53
| 2022-06-01T16:32:15
| 2022-06-01T16:32:15
|
NONE
| null | null | null | null |
As you can see from the example below, it saves the entire dataset instead of only the concatenated shards.
@lhoestq
You can check by running the following example:
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i in range(n)]
final_dataset=concatenate_datasets([kb_list[1],kb_list[2]])
final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')
```
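
A quick way to see whether only the two shards were written (purely an illustrative check, reusing the variables from the snippet above) is to reload the saved dataset and compare row counts:
```python
from datasets import load_from_disk

reloaded = load_from_disk('/home/gsir059/haha/k.arrow')
# Expected: roughly 2/20 of the original rows; instead the full dataset comes back
print(len(loaded_data), len(final_dataset), len(reloaded))
```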
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2189/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 419 days, 11:49:22
|
https://api.github.com/repos/huggingface/datasets/issues/2188
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2188/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2188/events
|
https://github.com/huggingface/datasets/issues/2188
| 853,044,166
|
MDU6SXNzdWU4NTMwNDQxNjY=
| 2,188
|
Duplicate data in Timit dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"events_url": "https://api.github.com/users/thanh-p/events{/privacy}",
"followers_url": "https://api.github.com/users/thanh-p/followers",
"following_url": "https://api.github.com/users/thanh-p/following{/other_user}",
"gists_url": "https://api.github.com/users/thanh-p/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thanh-p",
"id": 78190188,
"login": "thanh-p",
"node_id": "MDQ6VXNlcjc4MTkwMTg4",
"organizations_url": "https://api.github.com/users/thanh-p/orgs",
"received_events_url": "https://api.github.com/users/thanh-p/received_events",
"repos_url": "https://api.github.com/users/thanh-p/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thanh-p/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thanh-p/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thanh-p",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n"
] | 2021-04-08T04:21:54
| 2021-04-08T12:13:19
| 2021-04-08T12:13:19
|
NONE
| null | null | null | null |
I ran a simple piece of code to list all the texts in the Timit dataset, and the texts were all the same.
Is this dataset corrupted?
**Code:**
```python
from datasets import load_dataset

timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
```
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
...
...
Would such an act of refusal be useful?
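
A quick way to quantify the duplication (just a sanity-check snippet, reusing the `timit` object from the code above):
```python
texts = timit['train']['text']
print(len(texts), "rows,", len(set(texts)), "unique transcriptions")
```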
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"events_url": "https://api.github.com/users/thanh-p/events{/privacy}",
"followers_url": "https://api.github.com/users/thanh-p/followers",
"following_url": "https://api.github.com/users/thanh-p/following{/other_user}",
"gists_url": "https://api.github.com/users/thanh-p/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thanh-p",
"id": 78190188,
"login": "thanh-p",
"node_id": "MDQ6VXNlcjc4MTkwMTg4",
"organizations_url": "https://api.github.com/users/thanh-p/orgs",
"received_events_url": "https://api.github.com/users/thanh-p/received_events",
"repos_url": "https://api.github.com/users/thanh-p/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thanh-p/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thanh-p/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thanh-p",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2188/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7:51:25
|
https://api.github.com/repos/huggingface/datasets/issues/2187
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2187/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2187/events
|
https://github.com/huggingface/datasets/issues/2187
| 852,939,736
|
MDU6SXNzdWU4NTI5Mzk3MzY=
| 2,187
|
Question (potential issue?) related to datasets caching
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue",
"user_view_type": "public"
}
|
[
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
open
| false
| null |
[] |
[
"An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out loud here...)\r\n\r\nIf this is the case, it may be ok for my use case (have to think about it more), still a bit surprising given that datasets caching is disabled (or so I hope) by the lines I pasted above. ",
"Hi ! Currently disabling the caching means that all the dataset transform like `map`, `filter` etc. ignore the cache: it doesn't write nor read processed cache files.\r\nHowever `load_dataset` reuses datasets that have already been prepared: it does reload prepared dataset files.\r\n\r\nIndeed from the documentation:\r\n> datasets.set_caching_enabled(boolean: bool)\r\n\r\n> When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if it’s already been computed.\r\n> Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.\r\n> If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:\r\n> - cache files are always recreated\r\n> - cache files are written to a temporary directory that is deleted when session closes\r\n> - cache files are named using a random hash instead of the dataset fingerprint - use datasets.Dataset.save_to_disk() to save a transformed dataset or it will be deleted when session closes\r\n> - caching doesn’t affect datasets.load_dataset(). If you want to regenerate a dataset from scratch you should use the download_mode parameter in datasets.load_dataset().",
"Thank you for the clarification. \r\n\r\nThis is a bit confusing. On one hand, it says that cache files are always recreated and written to a temporary directory that is removed; on the other hand the last bullet point makes me think that since the default according to the docs for `download_mode (Optional datasets.GenerateMode) – select the download/generate mode - Default to REUSE_DATASET_IF_EXISTS` => it almost sounds that it could reload prepared dataset files. Where are these files stored? I guess not in the temporary directory that is removed... \r\n\r\nI find this type of api design error-prone. When I see as a programmer `datasets.set_caching_enabled(False)` I expect no reuse of anything in the cache. ",
"It would be nice if the documentation elaborated on all the possible values for `download_mode` and/or a link to `datasets.GenerateMode`. \r\nThis info here:\r\n```\r\n \"\"\"`Enum` for how to treat pre-existing downloads and data.\r\n The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both\r\n raw downloads and the prepared dataset if they exist.\r\n The generations modes:\r\n | | Downloads | Dataset |\r\n | -----------------------------------|-----------|---------|\r\n | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |\r\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\r\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\r\n```",
"I have another question. Assuming that I understood correctly and there is reuse of datasets files when caching is disabled (!), I'm guessing there is a directory that is created based on some information on the dataset file. I'm interested in the situation where I'm loading a (custom) dataset from local disk. What information is used to create the directory/filenames where the files are stored?\r\n\r\nI'm concerned about the following scenario: if I have a file, let's say `train.csv` at path `the_path`, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate `train.csv` at the same path `the_path`. Is there enough information in the temporary name/hash to *not* reload the *old* prepared dataset (e.g., timestamp of the file)? Or is it going to reload the *old* prepared file? ",
"Thanks for the feedback, we'll work in improving this aspect of the documentation.\r\n\r\n> Where are these files stored? I guess not in the temporary directory that is removed...\r\n\r\nWe're using the Arrow file format to load datasets. Therefore each time you load a dataset, it is prepared as an arrow file on your disk. By default the file is located in the ~/.cache/huggingface/datasets/<dataset_name>/<config_id>/<version> directory.\r\n\r\n> What information is used to create the directory/filenames where the files are stored?\r\n\r\nThe config_id contains a hash that takes into account:\r\n- the dataset loader used and its source code (e.g. the \"csv\" loader)\r\n- the arguments passed to the loader (e.g. the csv delimiter)\r\n- metadata of the local data files if any (e.g. their timestamps)\r\n\r\n> I'm concerned about the following scenario: if I have a file, let's say train.csv at path the_path, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate train.csv at the same path the_path. Is there enough information in the temporary name/hash to not reload the old prepared dataset (e.g., timestamp of the file)? Or is it going to reload the old prepared file?\r\n\r\nYes the timestamp of the local csv file is taken into account. If you edit your csv file, the config_id will change and loading the dataset will create a new arrow file.",
"Thank you for all your clarifications, really helpful! \r\n\r\nIf you have the bandwidth, please do revisit the api wrt cache disabling. Anywhere in the computer stack (hardware included) where you disable the cache, one assumes there is no caching that happens. ",
"That makes total sense indeed !\r\nI think we can do the change",
"I have another question about caching, this time in the case where FORCE_REDOWNLOAD is used to load the dataset, the datasets cache is one directory as defined by HF_HOME and there are multiple concurrent jobs running in a cluster using the same local dataset (i.e., same local files in the cluster). Does anything in the naming convention and/or file access/locking that you're using prevent race conditions between the concurrent jobs on the caching of the local dataset they all use?\r\n\r\nI noticed some errors (can provide more details if helpful) in load_dataset/prepare_split that lead to my question above. \r\n\r\nLet me know if my question is clear, I can elaborate more if needed @lhoestq Thank you!",
"I got another error that convinces me there is a race condition (one of the test files had zero samples at prediction time). I think it comes down to the fact that the `config_id` above (used in the naming for the cache) has no information on who's touching the data. If I have 2 concurrent jobs, both loading the same dataset and forcing redownload, they may step on each other foot/caching of the dataset. ",
"We're using a locking mechanism to prevent two processes from writing at the same time. The locking is based on the `filelock` module.\r\nAlso directories that are being written use a suffix \".incomplete\" so that reading is not possible on a dataset being written.\r\n\r\nDo you think you could provide a simple code to reproduce the race condition you experienced ?",
"I can provide details about the code I'm running (it's really-really close to some official samples from the huggingface transformers examples, I can point to the exact sample file, I kept a record of that). I can also describe in which conditions this race occurs (I'm convinced it has to do with forcing the redownloading of the dataset, I've been running hundreds of experiments before and didn't have a problem before I forced the redownload). I also can provide samples of the different stack errors I get and some details about the level of concurrency of jobs I was running. I can also try to imagine how the race manifests (I'm fairly sure that it's a combo of one job cleaning up and another job being in the middle of the run).\r\n\r\nHowever, I have to cleanup all this to make sure I'm no spilling any info I shouldn't be spilling. I'll try to do it by the end of the week, if you think all this is helpful. \r\n\r\nFor now, I have a workaround. Don't use forcing redownloading. And to be ultra careful (although I don't think this is a problem), I run a series of jobs that will prepare the datasets and I know there is no concurrency wrt the dataset. Once that's done (and I believe even having multiple jobs loading the datasets at the same time doesn't create problems, as long as REUSE_DATASET_IF_EXISTS is the policy for loading the dataset, so the filelock mechanism you're using is working in that scenario), the prepared datasets will be reused, no race possible in any way. \r\n\r\nThanks for all the details you provided, it helped me understand the underlying implementation and coming up with workarounds when I ran into issues. ",
"Hi! I have the same challenge with caching, where the **.cache** folder is required even though it isn't possible for me.\r\n\r\nI'd like to run transformers in Snowflake, using Snowpark for Python, this would mean I could provide configurable transformers in real-time for business users without having data leave an environment (for security reasons). With no need for data transfer,n the compute is faster. It is a large use case - is it possible to entirely disable caching in certain scenarios?\r\n@lhoestq ?\r\n",
"You can try to change the location of the cache folder using the `HF_CACHE_HOME` environment variable, and set a location where you have read/write access.",
"Thanks @lhoestq \r\n\r\nI wanted to do that, however, snowflake does not allow it to write at all. I'm asking around to see if they can help me out with that issue 😅"
] | 2021-04-08T00:16:28
| 2023-01-03T18:30:38
| null |
NONE
| null | null | null | null |
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```
Can you please let me know what this "Reusing dataset csv" message means? I wouldn't expect any reuse with the datasets caching disabled. Thank you!
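
For context, a sketch of what I would have expected to need for a truly fresh regeneration (based on my reading of the docs; the csv path is a placeholder) is to pass `download_mode` at load time rather than rely on `set_caching_enabled`:
```python
from datasets import load_dataset, GenerateMode

ds = load_dataset(
    "csv",
    data_files={"train": "train.csv"},            # placeholder path
    download_mode=GenerateMode.FORCE_REDOWNLOAD,  # regenerate instead of reusing the prepared csv
)
```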
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2187/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2185
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2185/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2185/events
|
https://github.com/huggingface/datasets/issues/2185
| 852,684,395
|
MDU6SXNzdWU4NTI2ODQzOTU=
| 2,185
|
.map() and distributed training
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VictorSanh",
"id": 16107619,
"login": "VictorSanh",
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VictorSanh",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seems to be slower at the moment (#1992), hope this helps you.",
"Thanks @hwijeen for the workaround, feels a bit prototypical but it works! (it seems files are written twice then though)\r\n\r\n(I haven't observed slowness using multiprocessed map function but I could be wrong)",
"To my understanding, files are written twice anyhow(one after load_dataset, another aftet map). It's just that you now have it at the location where you can see, whereas it was secretlely saved at caching folder(.cache/huggingface/datasets by default)! Correct me if I'm wrong!",
"Slowness in multiprocessing has been observed in certain environments but not others. We're investigating ;)",
"So to answer my initial question, I was just doing something stupid as I was not re-giving the `preprocessing_num_workers` arguments when launching the distributed training (and it was then set to `None`). I initially thought the hash was computed only with the `tokenize_function` but it's all arguments. Thanks @lhoestq for clarifying!",
"This cache process isn't really consistent. I just changed `per_device_train_batch_size` of training script and now it rebuilding the dataset cache!!!! Why?",
"Hi ! A `map` function is recomputed if the code changes or if any of the variables it uses changes. Can you check that your function doesn't use `per_device_train_batch_size` or any variable that contains `per_device_train_batch_size` ?",
"My code is actually a transformer's example for training t5, I modified a bit:\r\n\r\nhttps://github.com/puraminy/transformers/blob/4b40877132eedb566043f83de8f1d29a84d71430/examples/flax/language-modeling/run_t5_mlm_flax.py#L614\r\n\r\nNo, it doesn't use `per_device_train_batch_size`. I remember it worked for several times and then for no reason or various reasons like the above it started to build the cache again, as if it had an expiration date (maybe), or maybe I had changed the code! \r\n\r\nSo, to get rid of these problems I saved cache with a name (was forced to not use multiple_processes, because otherwise it generates multiple files) and then I load it from this cache file. "
] | 2021-04-07T18:22:14
| 2021-10-23T07:11:15
| 2021-04-09T15:38:31
|
CONTRIBUTOR
| null | null | null | null |
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`datasets` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=True,
)
```
I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split).
When I relaunch the script, the map tokenization is skipped in favor of loading the 31 previously cached files, and that's perfect.
Everything so far was done by launching a **single process script**.
I now launch the same training script in **distributed mode** (`python -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files.
I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't give the 31 cached files, so it probably isn't the right way to do it.
**My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training.
- I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case)
- I am using 1.5.0 version of datasets if that matters.
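
In the meantime, a sketch of the save_to_disk/load_from_disk workaround mentioned in the comments (simplified; it assumes `torch.distributed.launch` sets `LOCAL_RANK`, and reuses the variable names from the snippet above) could look like:
```python
import os
import torch.distributed as dist
from datasets import load_from_disk

tokenized_path = os.path.join(my_path, "tokenized")  # placeholder location

if int(os.environ.get("LOCAL_RANK", "0")) == 0:
    # Only rank 0 tokenizes and writes the result to disk
    tokenized = datasets.map(
        tokenize_function,
        batched=True,
        num_proc=preprocessing_num_workers,
        remove_columns=column_names,
    )
    tokenized.save_to_disk(tokenized_path)
if dist.is_initialized():
    dist.barrier()  # other ranks wait until rank 0 has finished writing
tokenized_datasets = load_from_disk(tokenized_path)
```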
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VictorSanh",
"id": 16107619,
"login": "VictorSanh",
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VictorSanh",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2185/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 21:16:17
|
https://api.github.com/repos/huggingface/datasets/issues/2181
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2181/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2181/events
|
https://github.com/huggingface/datasets/issues/2181
| 852,261,607
|
MDU6SXNzdWU4NTIyNjE2MDc=
| 2,181
|
Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well as the size of individual chunks in the dataset.\r\n\r\nYou can also try with bigger block sizes if needed",
"Hi @lhoestq! Thank you for your prompt reply.\r\nI have experimented with (10<<20, 10<<28, 10<<30, 10<<33, 10<<34), since my machine has 192G of memory, but it's either the above-mentioned error or processed killed because of OOM.\r\n\r\nCould you give me a bit of background on why block size needs to be exactly calibrated?\r\nTo my understanding, small block sized should run just fine despite its slowness..\r\n\r\n\r\n",
"We're using the JSON loader of pyarrow. It parses the file chunk by chunk to load the dataset.\r\nThis issue happens when there's no delimiter in one chunk of data. For json line, the delimiter is the end of line.\r\nSo with a big value for chunk_size this should have worked unless you have one extremely long line in your file.\r\n\r\nAlso what version of pyarrow are you using ?\r\n\r\nFInally I wonder if it could be an issue on pyarrow's side when using big json files. (I haven't tested big json files like yours)",
"I'm using `pyarrow==3.0.0` with `datasets==1.5.0`.\r\n\r\nYour point totally makes sense. I will check if my jsonl file contains an extremely long file and let you know. \r\n\r\nHere are some different error messages that I got when tweaking `block_size`. I also suspect that this is related to the pyarrow... but I guess it would be wonderful if datasesets could give a clear guide on how to play with large datasets! (I am suddenly experiencing various issue when working with large datasets.. e.g. #1992 )\r\n```python\r\n return paj.ReadOptions(use_threads=self.use_threads, block_size=self.block_size)\r\n File \"pyarrow/_json.pyx\", line 56, in pyarrow._json.ReadOptions.__init__\r\n File \"pyarrow/_json.pyx\", line 81, in pyarrow._json.ReadOptions.block_size.__set__\r\nOverflowError: value too large to convert to int32_t\r\n```\r\n\r\n```python\r\n\r\nline 83, in _generate_tables\r\n parse_options=self.config.pa_parse_options,\r\n File \"pyarrow/_json.pyx\", line 247, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```",
"I am getting the same error. When I tweak the block_size, I also find:\r\n`OverflowError: value too large to convert to int32_t`\r\nand \r\n`pyarrow.lib.ArrowInvalid: Exceeded maximum rows`\r\n",
"I made more tests. I used a smaller dataset and I was getting the same error, which means that it was not necessarily linked to the dataset size. To make both my smaller and larger datasets work, I got rid of lists with the json file. I had the following data format:\r\n```python\r\n[\r\n {'key': \"a\", 'value': ['one', 'two', 'three']},\r\n {'key': \"b\", 'value': ['four', 'five', 'six']}\r\n]\r\n```\r\nI changed to:\r\n\r\n```python\r\n {'key': \"a\", 'value': 'one\\ntwo\\nthree'},\r\n {'key': \"b\", 'value': 'four\\nfive\\nsix']}\r\n```\r\nand that worked!\r\n\r\nI used the following to reformat my json file:\r\n```python\r\nwith open(file_name, \"w\", encoding=\"utf-8\") as f:\r\n for item in list_:\r\n f.write(json.dumps(item) + \"\\n\")\r\n```\r\nThis works with `block_size_10MB = 10 << 20` or without specifying `block_size`.",
"Thanks @hwijeen for reporting and thanks @jpilaul for pointing this out.\r\n\r\nIndeed, those are different JSON-like formats:\r\n- the first one is the **standard JSON** format: all the file content is JSON-valid, thus all content is either a JSON object (between curly brackets `{...}`) or a JSON array (between square brackets `[...]`)\r\n- the second one is called **JSON Lines**: the entire file content is not JSON-valid, but only every line (newline-delimited) is JSON-valid\r\n\r\nCurrently PyArrow only supports **JSON Lines** format: \r\n- https://arrow.apache.org/docs/python/generated/pyarrow.json.read_json.html\r\n > Currently only the line-delimited JSON format is supported.\r\n- https://arrow.apache.org/docs/python/json.html\r\n > Arrow supports reading columnar data from line-delimited JSON files.",
"Thanks @albertvillanova for your explanation, it is helpful to know (maybe add to docs?)!\r\nHowever, the problem I described above happened when I was dealing with jsonl files 😿\r\nAlthough I did not thoroughly inspect, I suspect the cause was the one extremely long document in my case.",
"I see... I guess there is another problem going one then, related to the size."
] | 2021-04-07T10:26:46
| 2021-04-12T07:15:55
| 2021-04-12T07:15:55
|
NONE
| null | null | null | null |
Hi, thanks for the great library. I have used it for a couple of small projects, and I am now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir
yield tmp_dir
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables
parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
When using only a small portion of the sample file, say the first 100 lines, it works perfectly well.
I see that the error comes from pyarrow, but could you give me a hint or possible solutions?
#369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance!
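For reference, the fix that eventually worked in this thread was twofold: rewrite the file as JSON Lines (the only format PyArrow's JSON reader supports) and, if needed, raise the reader's block size. Below is a minimal sketch; the file names and the 10 MB block size are assumptions, and for a file this large the rewrite itself would have to be done incrementally rather than with a single `json.load`:
```python
import json

from datasets import load_dataset

# Assumption: "data.json" contains a single top-level JSON array of records.
# Rewrite it as JSON Lines, i.e. one JSON object per line.
with open("data.json", "r", encoding="utf-8") as f:
    records = json.load(f)

with open("data.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# block_size controls how many bytes PyArrow parses per chunk; 10 MB here is
# purely illustrative and can be raised if single lines are very long.
block_size_10MB = 10 << 20
dataset = load_dataset("json", data_files="data.jsonl", block_size=block_size_10MB)
```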
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2181/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4 days, 20:49:09
|
https://api.github.com/repos/huggingface/datasets/issues/2179
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2179/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2179/events
|
https://github.com/huggingface/datasets/issues/2179
| 852,237,957
|
MDU6SXNzdWU4NTIyMzc5NTc=
| 2,179
|
Load small datasets in-memory instead of using memory map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[] | 2021-04-07T09:58:16
| 2021-04-20T10:04:04
| 2021-04-20T10:04:03
|
MEMBER
| null | null | null | null |
Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However, this might not be necessary for small datasets. If a dataset is small enough, it can be loaded in-memory and:
- its memory footprint would be small so it's ok
- in-memory computations/queries would be faster
- the caching on-disk would be disabled, making computations even faster (no I/O bound because of the disk)
- but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed.
Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.
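A minimal sketch of what the explicit opt-in looks like from the user side, assuming the `keep_in_memory` flag on `load_dataset`; the dataset names below are only examples:
```python
from datasets import load_dataset

# Small dataset: copy the Arrow data fully into RAM instead of memory-mapping it.
small = load_dataset("glue", "sst2", split="train", keep_in_memory=True)

# Big dataset: keep the default behavior (memory-mapped Arrow file on disk).
big = load_dataset("wikitext", "wikitext-103-raw-v1", split="train", keep_in_memory=False)
```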
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2179/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 13 days, 0:05:47
|
https://api.github.com/repos/huggingface/datasets/issues/2176
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2176/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2176/events
|
https://github.com/huggingface/datasets/issues/2176
| 851,865,795
|
MDU6SXNzdWU4NTE4NjU3OTU=
| 2,176
|
Converting a Value to a ClassLabel
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nelson-liu",
"id": 7272031,
"login": "nelson-liu",
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"organizations_url": "https://api.github.com/users/nelson-liu/orgs",
"received_events_url": "https://api.github.com/users/nelson-liu/received_events",
"repos_url": "https://api.github.com/users/nelson-liu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nelson-liu",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class_names))\r\ndset = dset.map(lambda str_value: {col_name: class_feature.str2int(str_value)}, input_columns=col_name)\r\n\r\ndset = dset.cast(features.Features({\r\n ...\r\n col_name: class_feature\r\n})\r\n```\r\n",
"Hi! You can use `Dataset.class_encode_column` for this. And in the next release of `datasets` (this feature is only available on `master`), you'll also be able to use `cast` to do the conversion. \r\n\r\nAn example of conversion via `cast`: \r\n```python\r\nfrom datasets import Dataset, Features, ClassLabel\r\nd = Dataset.from_dict({\"a\": [\"no\", \"yes\", \"no\"]})\r\nd = d.cast(Features({\"a\": ClassLabel(names=[\"yes\", \"no\"])}))\r\n```"
] | 2021-04-06T22:54:16
| 2022-06-01T16:31:49
| 2022-06-01T16:31:49
|
NONE
| null | null | null | null |
Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
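For anyone landing here, a minimal sketch of the `class_encode_column` route mentioned in this thread; the column and label names are made up:
```python
from datasets import Dataset

# Toy dataset with a string label column; names are illustrative only.
dset = Dataset.from_dict({"text": ["good", "bad", "fine"], "label": ["pos", "neg", "pos"]})

# class_encode_column converts the string column into a ClassLabel feature.
dset = dset.class_encode_column("label")
print(dset.features["label"])  # ClassLabel(names=['neg', 'pos'], ...)
print(dset[0]["label"])        # integer id instead of the original string
```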
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2176/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 420 days, 17:37:33
|
https://api.github.com/repos/huggingface/datasets/issues/2175
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2175/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2175/events
|
https://github.com/huggingface/datasets/issues/2175
| 851,836,096
|
MDU6SXNzdWU4NTE4MzYwOTY=
| 2,175
|
dataset.search_batch() function sometimes outputs all -1 indices
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.",
"@lhoestq @patrickvonplaten \r\n\r\nI also found another short bug in the retrieval part. Especially, when retrieving documents. If Faiss returns the -1 as the index, the retriever will always use the last element in the dataset.\r\n\r\nplease check [def get_doc_dicts function](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L222)\r\n\r\n\r\nDoes the use of the HNSW guarantee to retrieve valid indexes always? \r\n\r\n",
"Hi !\r\nNo it happens sometimes to return -1, especially if your dataset is small.\r\nIf your dataset is big enough it shouldn't happen in my experience.\r\n\r\nIdeally we should ignore all the -1 that are returned. It should be possible to change that in RAG's code ",
"I also checked with some indexes it returns more -1s. Specially with IVF\nwhen nprobr is very low. It doesn't happen when using HNSW though. But at\nthe moment if it happens, dataset will always return the last element.\nMaybe we should change it to repeat the most last valid retrieved doc id.\nWhat do you think?\n\nOn Wed, Apr 7, 2021, 21:09 Quentin Lhoest ***@***.***> wrote:\n\n> Hi !\n> No it happens sometimes to return -1, especially if your dataset is small.\n> If your dataset is big enough it shouldn't happen.\n>\n> Ideally we should ignore all the -1 that are returned. It should be\n> possible to change that in RAG's code\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2175#issuecomment-814746509>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGTENOTLBEZTXEO2RS3THQOMPANCNFSM42PRVYDA>\n> .\n>\n",
"That would be an easy way to workaround this issue. Feel free to open a PR on `transformers` and ping me ! :)",
"Sure. Will push everything together with RAG end to end. :) thanks a lot.\n\nOn Wed, Apr 7, 2021, 21:16 Quentin Lhoest ***@***.***> wrote:\n\n> That would be an easy way to workaround this issue. Feel free to open a PR\n> on transformers and ping me ! :)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2175#issuecomment-814752589>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGWLROCGARKN7WOJYSTTHQPH5ANCNFSM42PRVYDA>\n> .\n>\n"
] | 2021-04-06T21:50:49
| 2021-04-16T12:21:16
| 2021-04-16T12:21:15
|
NONE
| null | null | null | null |
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase, exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker.

Here, my retrieval batch size is 2 and n_docs is 5. I can work around this at the np.stack call, but I want to ask why we get an output index of -1. Do you have any idea :) ?
Is this a problem with the index, where faiss can't find any similar vector?
Is there documentation on the output index being -1?
@lhoestq
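For reference, a minimal sketch of ignoring the -1 ids (which FAISS returns when it cannot find enough neighbours) before looking up documents; the numbers below are made up:
```python
import numpy as np

# Shapes are (batch_size, n_docs), as returned by dataset.search_batch().
# -1 marks "no neighbour found"; the values here are purely illustrative.
indices = np.array([
    [12, 7, -1, -1, -1],
    [-1, -1, -1, -1, -1],
])

# Keep only valid ids per query instead of passing -1 (or the last row) downstream.
valid_ids = [row[row != -1].tolist() for row in indices]
print(valid_ids)  # [[12, 7], []]
```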
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2175/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 9 days, 14:30:26
|
https://api.github.com/repos/huggingface/datasets/issues/2170
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2170/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2170/events
|
https://github.com/huggingface/datasets/issues/2170
| 850,913,228
|
MDU6SXNzdWU4NTA5MTMyMjg=
| 2,170
|
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4",
"events_url": "https://api.github.com/users/leezu/events{/privacy}",
"followers_url": "https://api.github.com/users/leezu/followers",
"following_url": "https://api.github.com/users/leezu/following{/other_user}",
"gists_url": "https://api.github.com/users/leezu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leezu",
"id": 946903,
"login": "leezu",
"node_id": "MDQ6VXNlcjk0NjkwMw==",
"organizations_url": "https://api.github.com/users/leezu/orgs",
"received_events_url": "https://api.github.com/users/leezu/received_events",
"repos_url": "https://api.github.com/users/leezu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leezu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leezu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leezu",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"It seems that this can be fixed from user's end by including a `date` argument, like this:\r\n\r\n`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`\r\n\r\nYou can get available dates from [here](https://dumps.wikimedia.org/enwiki/).\r\n\r\nThis is not a proper fix however as all the files will still have '20200501' in their file names."
] | 2021-04-06T03:13:18
| 2021-06-16T01:10:50
| null |
NONE
| null | null | null | null |
Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ 02-Mar-2021 01:25 -
20210201/ 21-Mar-2021 01:26 -
20210220/ 02-Apr-2021 01:26 -
20210301/ 03-Mar-2021 08:10 -
20210320/ 21-Mar-2021 18:13 -
20210401/ 03-Apr-2021 10:08 -
latest/ 03-Apr-2021 10:08 -
```
However, the wikipedia dataset provided in the library only supports the following configs, none of which are applicable anymore when disregarding the cached datasets:
```
ValueError: BuilderConfig 20210401.ko not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', 
'20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']
```
The cached datasets:
```
% aws s3 --no-sign-request --endpoint-url https://storage.googleapis.com ls s3://huggingface-nlp/cache/datasets/wikipedia/
PRE 20200501.de/
PRE 20200501.en/
PRE 20200501.fr/
PRE 20200501.frr/
PRE 20200501.it/
PRE 20200501.simple/
```
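A sketch of the workaround quoted in the comments: pass an explicit dump date that still exists on dumps.wikimedia.org. The date below is an example, the downloaded file names will still contain '20200501', and some languages may additionally need a Beam runner, so treat this as illustrative only:
```python
from datasets import load_dataset

# Override the hardcoded dump date with one that is still available.
dataset = load_dataset("wikipedia", "20200501.ko", date="20210401")
```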
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2170/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2170/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2167
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2167/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2167/events
|
https://github.com/huggingface/datasets/issues/2167
| 849,944,891
|
MDU6SXNzdWU4NDk5NDQ4OTE=
| 2,167
|
Split type not preserved when reloading the dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2021-04-04T19:29:54
| 2021-04-19T09:08:55
| 2021-04-19T09:08:55
|
COLLABORATOR
| null | null | null | null |
A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<class 'str'>
```
It seems like this bug was introduced in #2025.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2167/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 14 days, 13:39:01
|
https://api.github.com/repos/huggingface/datasets/issues/2166
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2166/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2166/events
|
https://github.com/huggingface/datasets/issues/2166
| 849,778,545
|
MDU6SXNzdWU4NDk3Nzg1NDU=
| 2,166
|
Regarding Test Sets for the GEM datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"events_url": "https://api.github.com/users/vyraun/events{/privacy}",
"followers_url": "https://api.github.com/users/vyraun/followers",
"following_url": "https://api.github.com/users/vyraun/following{/other_user}",
"gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vyraun",
"id": 17217068,
"login": "vyraun",
"node_id": "MDQ6VXNlcjE3MjE3MDY4",
"organizations_url": "https://api.github.com/users/vyraun/orgs",
"received_events_url": "https://api.github.com/users/vyraun/received_events",
"repos_url": "https://api.github.com/users/vyraun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyraun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vyraun",
"user_view_type": "public"
}
|
[
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] |
closed
| false
| null |
[] |
[
"Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of the test sets but shouldn't really be used for benchmark submissions)\r\n\r\ncc @sebastiangehrmann",
"Oh okay, thanks @yjernite ! "
] | 2021-04-04T02:02:45
| 2021-04-06T08:13:12
| 2021-04-06T08:13:12
|
NONE
| null | null | null | null |
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test'][0]
{'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''}
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"events_url": "https://api.github.com/users/vyraun/events{/privacy}",
"followers_url": "https://api.github.com/users/vyraun/followers",
"following_url": "https://api.github.com/users/vyraun/following{/other_user}",
"gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vyraun",
"id": 17217068,
"login": "vyraun",
"node_id": "MDQ6VXNlcjE3MjE3MDY4",
"organizations_url": "https://api.github.com/users/vyraun/orgs",
"received_events_url": "https://api.github.com/users/vyraun/received_events",
"repos_url": "https://api.github.com/users/vyraun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyraun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vyraun",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2166/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 6:10:27
|
https://api.github.com/repos/huggingface/datasets/issues/2165
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2165/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2165/events
|
https://github.com/huggingface/datasets/issues/2165
| 849,771,665
|
MDU6SXNzdWU4NDk3NzE2NjU=
| 2,165
|
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4",
"events_url": "https://api.github.com/users/y-rokutan/events{/privacy}",
"followers_url": "https://api.github.com/users/y-rokutan/followers",
"following_url": "https://api.github.com/users/y-rokutan/following{/other_user}",
"gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/y-rokutan",
"id": 24562381,
"login": "y-rokutan",
"node_id": "MDQ6VXNlcjI0NTYyMzgx",
"organizations_url": "https://api.github.com/users/y-rokutan/orgs",
"received_events_url": "https://api.github.com/users/y-rokutan/received_events",
"repos_url": "https://api.github.com/users/y-rokutan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/y-rokutan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\na HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:\r\n```python\r\nfrom torch.utils.data import Dataset\r\n \r\nclass HFDataset(Dataset):\r\n def __init__(self, dset):\r\n self.dset = dset\r\n\r\n def __getitem__(self, idx):\r\n return self.dset[idx]\r\n\r\n def __len__(self):\r\n return len(self.dset)\r\n\r\ntrain_ds = HFDataset(train_ds)\r\n```\r\n@lhoestq Since the Arrow Dataset already provides `__getitem__` and `__len__`, I think we could use the [virtual subclass](https://docs.python.org/3/library/abc.html#abc.ABCMeta.register) mechanism from the `abc` module to elegantly solve this issue. This mechanism would allow the Arrow Dataset to be used in place of the Torch Dataset because the `isinstance(instance of Arrow Dataset, TorchDataset)` check would return True (DeepSpeed has this check [here](https://github.com/microsoft/DeepSpeed/blob/ab5534fc4c0f8ca21ada321f9730d723aa31288b/deepspeed/runtime/engine.py#L823)).\r\n\r\nAnd it requires a minimal change in the `arrow_dataset.py` file:\r\n```python\r\nif config.TORCH_AVAILABLE:\r\n from torch.utils.data import Dataset as TorchDataset\r\n TorchDataset.register(Dataset)\r\n```",
"Interesting ! Thanks for sharing this @mariosasko . I like the idea\r\nThis looks like something we should add IMO",
"@mariosasko \r\nThx for your code!\r\nIt perfectly works with a small modification for HF NLP dataset:\r\n```\r\noriginal_ds = nlp.load_dataset('scientific_papers', 'arxiv')\r\ntrain_ds = HFDataset(train_ds['train']) # needs splitting\r\n```",
"@lhoestq Sadly, from Python 3.7 onwards `torch.utils.data.Dataset` doesn't support the virtual subclass mechanism due to `typing.Generic` type no longer having `abc.ABCMeta` as its metaclass.\r\n\r\nWith that in mind, another option is to remove a direct type check (`isinstance(dataset, torch.utils.data.Dataset)`) in `deepspeed.initalize` and to rewrite the checks in a manner similar to `torch.utils.data.DataLoader` ([link](https://github.com/pytorch/pytorch/blob/b80c6f863f2327c712c478f67c248b94d66b65ac/torch/utils/data/dataloader.py#L197-L239)). This is exactly why the `DataLoader` works with arbitrary objects that provide `__getitem__` and `__len__` (and in our case, the `ArrowDataset`). By doing so, their code wouldn't be any stricter in comparison to the `DataLoader`.\r\n\r\nSo if you agree, I can open an issue in their repo and fix this if they like the idea.",
"That makes sense ! Feel free to open an issue on their repo and discuss this idea",
"@y-rokutan Hi, now if you install `deepspeed` from master (this feature will be available in the next official release), the code should work without subclassing. Let us know if you still have any issues.",
"Worth mentioning that any function that expects a `torch..Dataset` (like `torch..DataLoader`) will fail a mypy-esque typecheck if a `datasets.Dataset` is passed, even though it implements the interface correctly (I think). The virtual subclass idea was a good one- I wonder if there's another workaround given the Generic issue. What we're really talking about is something similar to the structural subtyping semantics that `typing.Protocol` defines. If `torch..DataLoader` accepted anything that supports `__getitem__` and `__len__` methods this would be much easier. Not sure if there's a way to do this without the wrapper from the perspective of `datasets`."
] | 2021-04-04T01:01:48
| 2021-08-24T15:55:35
| 2021-04-07T15:06:04
|
NONE
| null | null | null | null |
Hi,
I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like this:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
args=args,
model=model,
model_parameters=[p for p in model.parameters() if p.requires_grad],
training_data=train_ds)
```
but `deepspeed.initialize` accepts a `torch.utils.data.Dataset` only. How can I convert an HF-style dataset to a torch-style dataset?
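For context, a minimal sketch showing that the Arrow dataset already satisfies the map-style protocol (`__getitem__`/`__len__`) that `torch.utils.data.DataLoader` expects; the dataset and column names are placeholders:
```python
from torch.utils.data import DataLoader

from datasets import load_dataset

# A datasets.Dataset implements __getitem__ and __len__, so DataLoader can
# iterate over it directly once the output format is set to torch tensors.
ds = load_dataset("glue", "sst2", split="train[:100]")
ds.set_format(type="torch", columns=["label"])

loader = DataLoader(ds, batch_size=8)
for batch in loader:
    print(batch["label"].shape)  # torch.Size([8])
    break
```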
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4",
"events_url": "https://api.github.com/users/y-rokutan/events{/privacy}",
"followers_url": "https://api.github.com/users/y-rokutan/followers",
"following_url": "https://api.github.com/users/y-rokutan/following{/other_user}",
"gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/y-rokutan",
"id": 24562381,
"login": "y-rokutan",
"node_id": "MDQ6VXNlcjI0NTYyMzgx",
"organizations_url": "https://api.github.com/users/y-rokutan/orgs",
"received_events_url": "https://api.github.com/users/y-rokutan/received_events",
"repos_url": "https://api.github.com/users/y-rokutan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/y-rokutan",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2165/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2165/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 14:04:16
|
https://api.github.com/repos/huggingface/datasets/issues/2162
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2162/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2162/events
|
https://github.com/huggingface/datasets/issues/2162
| 849,129,201
|
MDU6SXNzdWU4NDkxMjkyMDE=
| 2,162
|
visualization for cc100 is broken
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] |
closed
| false
| null |
[] |
[
"This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?",
"Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself but not sure\n> Did you try loading cc100 on your machine ?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2162#issuecomment-814793809>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMRUO33JSOYGT6RETWLTHQWNLANCNFSM42IUOR6Q>\n> .\n>\n",
"Hi! This visualization tool is deprecated now. The viewer at https://huggingface.co/datasets/cc100 works fine, so I'm closing this issue."
] | 2021-04-02T10:11:13
| 2022-10-05T13:20:24
| 2022-10-05T13:20:24
|
NONE
| null | null | null | null |
Hi
the visualization through the dataset viewer for cc100 is broken:
https://huggingface.co/datasets/viewer/
thanks a lot
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2162/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 551 days, 3:09:11
|
https://api.github.com/repos/huggingface/datasets/issues/2161
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2161/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2161/events
|
https://github.com/huggingface/datasets/issues/2161
| 849,127,041
|
MDU6SXNzdWU4NDkxMjcwNDE=
| 2,161
|
any possibility to download part of large datasets only?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Not yet but it’s on the short/mid-term roadmap (requested by many indeed).",
"oh, great, really awesome feature to have, thank you very much for the great, fabulous work",
"We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)",
"thanks a lot Quentin, this would be really really a great feature to have\n\nOn Wed, Apr 7, 2021 at 12:14 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> We'll work on dataset streaming soon. This should allow you to only load\n> the examples you need ;)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2161#issuecomment-814791922>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMROD62QAKIJMAKWISTTHQWBVANCNFSM42IUI5JQ>\n> .\n>\n",
"Is streaming completed? On the 1.8.0 docs it is mentioned (https://huggingface.co/docs/datasets/dataset_streaming.html), but when following the example I get the following error:\r\n\r\n```\r\n>>> dataset2 = load_dataset(\"amazon_us_reviews\", \"Pet_Products_v1_00\", split='train', streaming=True)\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-21-1eedab26cff1> in <module>()\r\n----> 1 en_dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n\r\n3 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)\r\n 339 if value is not None:\r\n 340 if not hasattr(builder_config, key):\r\n--> 341 raise ValueError(f\"BuilderConfig {builder_config} doesn't have a '{key}' key.\")\r\n 342 setattr(builder_config, key, value)\r\n 343 \r\n\r\nValueError: BuilderConfig OscarConfig(name='unshuffled_deduplicated_en', version=1.0.0, data_dir=None, data_files=None, description='Unshuffled and deduplicated, English OSCAR dataset') doesn't have a 'streaming' key.\r\n```\r\n\r\nUPDATE: Managed to get streaming working by building from source and installing the additional `datasets[streaming]` package:\r\n\r\n```\r\n!pip install git+https://github.com/huggingface/datasets.git\r\n!pip install datasets[streaming]\r\n```",
"Hi ! Streaming is available on `master` only right now. We'll make a new release 1.9.0 on Monday :)"
] | 2021-04-02T10:06:46
| 2022-10-05T13:26:51
| 2022-10-05T13:26:51
|
NONE
| null | null | null | null |
Hi
Some of the datasets I need, like cc100, are very large, and I wonder if I can download only the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling. Thanks!
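For reference, a sketch of what this looks like with the streaming mode discussed in the comments; the `cc100` config and the sample count are just examples:
```python
from itertools import islice

from datasets import load_dataset

# Streaming mode downloads and decodes examples on the fly instead of the whole corpus.
stream = load_dataset("cc100", lang="en", split="train", streaming=True)

# Take only the first 1000 examples without materializing the full dataset.
first_1000 = list(islice(iter(stream), 1000))
print(len(first_1000))
```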
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2161/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 551 days, 3:20:05
|
https://api.github.com/repos/huggingface/datasets/issues/2160
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2160/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2160/events
|
https://github.com/huggingface/datasets/issues/2160
| 849,052,921
|
MDU6SXNzdWU4NDkwNTI5MjE=
| 2,160
|
data_args.preprocessing_num_workers almost freezes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|███████████████▊ | 172/1583 [00:46<06:21, 3.70ba/s]\r\n#4: 9%|█████████████▏ | 143/1583 [00:46<07:46, 3.09ba/s]\r\n#7: 6%|█████████ | 98/1583 [00:45<11:34, 2.14ba/s]\r\n#5: 8%|███████████▍ | 124/1583 [00:46<09:03, 2.68ba/s]\r\n#6: 7%|██████████▏ \r\n```",
"closing since I cannot reproduce it again, thanks "
] | 2021-04-02T07:56:13
| 2021-04-02T10:14:32
| 2021-04-02T10:14:31
|
NONE
| null | null | null | null |
Hi @lhoestq
I am running this code from huggingface transformers: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
To speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with the opus100 corpus. Tokenization progresses up to a point, then appears to freeze for a while, and then resumes, so overall it takes more time than the single-process case. I would appreciate your advice on how to use this option properly to speed things up; a minimal sketch of the relevant map call is included below.
thanks
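For reference, a minimal sketch (not the actual run_mlm.py code) of how `preprocessing_num_workers` ends up as `num_proc` in `Dataset.map`; the opus100 config, model name, and the small 1% slice below are illustrative assumptions only:
```python
# Hedged sketch: multi-process tokenization with `num_proc`, mirroring what
# run_mlm.py does with data_args.preprocessing_num_workers. Dataset config,
# model name and the small slice are assumptions for illustration only.
from datasets import load_dataset
from transformers import AutoTokenizer

raw = load_dataset("opus100", "de-en", split="train[:1%]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def tokenize(batch):
    # opus100 rows look like {"translation": {"de": ..., "en": ...}}; tokenize the English side
    return tokenizer([ex["en"] for ex in batch["translation"]], truncation=True)

tokenized = raw.map(
    tokenize,
    batched=True,
    num_proc=4,                       # == preprocessing_num_workers
    remove_columns=raw.column_names,  # keep only the tokenizer outputs
)
print(tokenized)
```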
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2160/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:18:18
|
https://api.github.com/repos/huggingface/datasets/issues/2159
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2159/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2159/events
|
https://github.com/huggingface/datasets/issues/2159
| 848,851,962
|
MDU6SXNzdWU4NDg4NTE5NjI=
| 2,159
|
adding ccnet dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] |
[
"closing since I think this is cc100, just the name has been changed. thanks "
] | 2021-04-01T23:28:36
| 2021-04-02T10:05:19
| 2021-04-02T10:05:19
|
NONE
| null | null | null | null |
## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
this is one of the most comprehensive clean monolingual datasets across a variety of languages, and quite important for cross-lingual research.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
thanks
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2159/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10:36:43
|
https://api.github.com/repos/huggingface/datasets/issues/2158
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2158/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2158/events
|
https://github.com/huggingface/datasets/issues/2158
| 848,506,746
|
MDU6SXNzdWU4NDg1MDY3NDY=
| 2,158
|
viewer "fake_news_english" error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4",
"events_url": "https://api.github.com/users/emanuelevivoli/events{/privacy}",
"followers_url": "https://api.github.com/users/emanuelevivoli/followers",
"following_url": "https://api.github.com/users/emanuelevivoli/following{/other_user}",
"gists_url": "https://api.github.com/users/emanuelevivoli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emanuelevivoli",
"id": 9447991,
"login": "emanuelevivoli",
"node_id": "MDQ6VXNlcjk0NDc5OTE=",
"organizations_url": "https://api.github.com/users/emanuelevivoli/orgs",
"received_events_url": "https://api.github.com/users/emanuelevivoli/received_events",
"repos_url": "https://api.github.com/users/emanuelevivoli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emanuelevivoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emanuelevivoli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emanuelevivoli",
"user_view_type": "public"
}
|
[
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly",
"This viewer tool is deprecated now and the new viewer at https://huggingface.co/datasets/fake_news_english works fine, so I'm closing this issue"
] | 2021-04-01T14:13:20
| 2022-10-05T13:22:02
| 2022-10-05T13:22:02
|
NONE
| null | null | null | null |
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, the dataset "fake_news_english" shows this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance'
as well as the full error traceback.
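As a hedged note (not a fix for the viewer itself): the missing dependency is `openpyxl`, the pandas optional dependency for reading .xlsx files, so after running `pip install openpyxl` loading the dataset locally should work:
```python
# Hedged sketch: loading the dataset locally once `openpyxl` is installed
# (pip install openpyxl), since the ImportError comes from pandas needing it
# to read the underlying .xlsx files.
from datasets import load_dataset

dataset = load_dataset("fake_news_english")
print(dataset)
```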
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2158/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 551 days, 23:08:42
|
https://api.github.com/repos/huggingface/datasets/issues/2153
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2153/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2153/events
|
https://github.com/huggingface/datasets/issues/2153
| 846,181,502
|
MDU6SXNzdWU4NDYxODE1MDI=
| 2,153
|
load_dataset ignoring features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GuillemGSubies",
"id": 37592763,
"login": "GuillemGSubies",
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GuillemGSubies",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201",
"Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.",
"Hi :) We're indeed working on tutorials that we will add to the docs !"
] | 2021-03-31T08:30:09
| 2022-10-05T13:29:12
| 2022-10-05T13:29:12
|
NONE
| null | null | null | null |
First of all, I'm sorry if this is a duplicate issue or if the changes are already in master; I searched and didn't find anything.
I'm using datasets 1.5.0.

As you can see, when I load the dataset the ClassLabel features are ignored, and I have to cast the dataset in order to make it work (a sketch of the cast workaround is at the end of this description).
Code to reproduce:
```python
import datasets
data_location = "/data/prueba_multiclase"
features = datasets.Features(
{"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])}
)
dataset = datasets.load_dataset(
"csv", data_files=data_location, delimiter="\t", features=features
)
```
Dataset I used:
[prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped)
Thank you! ❤️
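For completeness, a minimal sketch of the cast workaround mentioned above (the underlying bug was later addressed in #2201); `dataset` and `features` are the objects from the snippet above, not new definitions:
```python
# Hedged sketch of the cast workaround described above: re-cast the loaded
# DatasetDict to the intended features so "label" becomes a ClassLabel.
# Depending on the datasets version, cast may return a new object, so re-assign.
dataset = dataset.cast(features)
print(dataset["train"].features["label"])  # expected: ClassLabel(names=['false', 'true'])
```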
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2153/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 553 days, 4:59:03
|
https://api.github.com/repos/huggingface/datasets/issues/2149
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2149/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2149/events
|
https://github.com/huggingface/datasets/issues/2149
| 844,734,076
|
MDU6SXNzdWU4NDQ3MzQwNzY=
| 2,149
|
Telugu subset missing for xtreme tatoeba dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"events_url": "https://api.github.com/users/cosmeowpawlitan/events{/privacy}",
"followers_url": "https://api.github.com/users/cosmeowpawlitan/followers",
"following_url": "https://api.github.com/users/cosmeowpawlitan/following{/other_user}",
"gists_url": "https://api.github.com/users/cosmeowpawlitan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cosmeowpawlitan",
"id": 50871412,
"login": "cosmeowpawlitan",
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"organizations_url": "https://api.github.com/users/cosmeowpawlitan/orgs",
"received_events_url": "https://api.github.com/users/cosmeowpawlitan/received_events",
"repos_url": "https://api.github.com/users/cosmeowpawlitan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cosmeowpawlitan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cosmeowpawlitan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cosmeowpawlitan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Good catch ! Thanks for reporting\r\n\r\nI just opened #2180 to fix this",
"Fixed in #2180"
] | 2021-03-30T15:26:34
| 2022-10-05T13:28:30
| 2022-10-05T13:28:30
|
CONTRIBUTOR
| null | null | null | null |
from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
ValueError: BuilderConfig tatoeba.tel not found.
but language tel is actually included in xtreme:
https://github.com/google-research/xtreme/blob/master/utils_preprocess.py
def tatoeba_preprocess(args):
lang3_dict = {
'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn',
'deu':'de', 'ell':'el', 'spa':'es', 'est':'et',
'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr',
'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id',
'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka',
'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr',
'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw',
'tam':'ta', 'tel':'te', 'tha':'th', 'tgl':'tl',    # <---- 'tel':'te' is included here
'tur':'tr', 'urd':'ur', 'vie':'vi', 'cmn':'zh',
'eng':'en',
}
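A hedged sketch for checking which tatoeba configs the loading script actually exposes; `get_dataset_config_names` is assumed to be available (it exists in recent versions of `datasets`), and before the fix in #2180 `tatoeba.tel` was missing from this list:
```python
# Hedged sketch: list the tatoeba.* configs exposed by the xtreme loading script.
# Before the fix in #2180, 'tatoeba.tel' did not appear here, hence the
# BuilderConfig error above.
from datasets import get_dataset_config_names

configs = get_dataset_config_names("xtreme")
print(sorted(c for c in configs if c.startswith("tatoeba")))
```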
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2149/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2149/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 553 days, 22:01:56
|
https://api.github.com/repos/huggingface/datasets/issues/2148
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2148/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2148/events
|
https://github.com/huggingface/datasets/issues/2148
| 844,700,910
|
MDU6SXNzdWU4NDQ3MDA5MTA=
| 2,148
|
Add configurable options to `seqeval` metric
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"events_url": "https://api.github.com/users/marrodion/events{/privacy}",
"followers_url": "https://api.github.com/users/marrodion/followers",
"following_url": "https://api.github.com/users/marrodion/following{/other_user}",
"gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marrodion",
"id": 44571847,
"login": "marrodion",
"node_id": "MDQ6VXNlcjQ0NTcxODQ3",
"organizations_url": "https://api.github.com/users/marrodion/orgs",
"received_events_url": "https://api.github.com/users/marrodion/received_events",
"repos_url": "https://api.github.com/users/marrodion/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrodion/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marrodion",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @marrodion. \r\n\r\nThanks for pointing this out. It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `importlib`:\r\n```python\r\nif scheme:\r\n scheme = importlib.import_module(f\"seqeval.scheme.{scheme}\")\r\n```\r\n\r\nFeel free to create a Pull Request to make this contribution."
] | 2021-03-30T15:04:06
| 2021-04-15T13:49:46
| 2021-04-15T13:49:46
|
CONTRIBUTOR
| null | null | null | null |
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to CoNLL evaluation).
However, the seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which could be plugged in just by supporting additional kwargs in `Seqeval._compute`:
https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109
Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only full entity matches as true positives and omit partial matches (see the sketch at the end of this description).
The only problem I see is that the spirit of `metrics` seems to be that no additional imports should be required from the user, while `seqeval` only supports schemes as objects, without any string aliases.
It could be solved naively with a mapping like `{"IOB2": seqeval.scheme.IOB2}`, or left as is, requiring users to explicitly import the scheme from `seqeval` if they want to configure it past the default implementation.
If that makes sense, I am happy to implement the change.
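A minimal sketch of what the proposed kwargs would enable, calling seqeval directly with strict IOB2 evaluation; the tag sequences are toy examples made up for illustration:
```python
# Hedged sketch: strict IOB2 evaluation in seqeval counts only exact entity
# matches as true positives, which is what the proposed `mode`/`scheme` kwargs
# would expose through the metric. The tag sequences are toy examples.
from seqeval.metrics import classification_report
from seqeval.scheme import IOB2

y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "O",     "O", "B-LOC"]]

print(classification_report(y_true, y_pred, mode="strict", scheme=IOB2))
```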
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2148/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 15 days, 22:45:40
|
https://api.github.com/repos/huggingface/datasets/issues/2146
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2146/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2146/events
|
https://github.com/huggingface/datasets/issues/2146
| 844,673,244
|
MDU6SXNzdWU4NDQ2NzMyNDQ=
| 2,146
|
Dataset file size on disk is very large with 3D Array
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4",
"events_url": "https://api.github.com/users/jblemoine/events{/privacy}",
"followers_url": "https://api.github.com/users/jblemoine/followers",
"following_url": "https://api.github.com/users/jblemoine/following{/other_user}",
"gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jblemoine",
"id": 22685854,
"login": "jblemoine",
"node_id": "MDQ6VXNlcjIyNjg1ODU0",
"organizations_url": "https://api.github.com/users/jblemoine/orgs",
"received_events_url": "https://api.github.com/users/jblemoine/received_events",
"repos_url": "https://api.github.com/users/jblemoine/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jblemoine",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hi ! In the arrow file we store all the integers as uint8.\r\nSo your arrow file should weigh around `height x width x n_channels x n_images` bytes.\r\n\r\nWhat feature type do your TFDS dataset have ?\r\n\r\nIf it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for example). Since these encodings are made for compression, the resulting tfrecord is smaller that the arrow file.\r\n\r\nWe are working on adding a similar feature in `datasets`: the ability to store the encoded data instead of the raw integers for images, but also for audio data. This way, arrow files will have similar sizes as tfrecords for images.",
"Thanks for the prompt response. You're right about the encoding, I have the `tfds.features.Image` feature type you mentioned.\r\nHowever, as described in the `dataset_info.json`, my dataset is made of 1479 (224x224x3) images. 1479 x 224 x 224 x 3 = 222630912 bytes which is far from the actual size 520803408 bytes. \r\n\r\nAnyway I look forward to the Image feature type in `datasets`. ",
"@lhoestq I changed the data structure so I have a 2D Array feature type instead of a 3D Array by grouping the two last dimensions ( a 224x672 2D Array instead of a 224x224x3 3D Array). The file size is now 223973964 bytes, nearly half the previous size! Which is around of what I would expect.\r\nI found similar behavior in existing `datasets` collection, when comparing black and white vs color image, for example MNIST vs CIFAR. ",
"Interesting !\r\nThis may be because of the offsets that are stored with the array data.\r\n\r\nCurrently the offsets are stored even if the `shape` of the arrays is fixed. This was needed because of some issues with pyarrow a few months ago. I think these issues have been addressed now, so we can probably try to remove them to make the file lighter.\r\n\r\nIdeally in your case the floats data should be 220 MB for both Array2D and Array3D",
"Yeah for sure, can you be a bit more specific about where the offset is stored in the code base ? And any reference to pyarrow issues if you have some. I would be very interested in contributing to `datasets` by trying to fix this issue. ",
"Pyarrow has two types of lists: variable length lists and fixed size lists.\r\nCurrently we store the ArrayXD data as variable length lists. They take more disk space because they must store both actual data and offsets.\r\nIn the `datasets` code this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/dbac87c8a083f806467f5afc4ec9b401a7e4c15c/src/datasets/features.py#L346-L352\r\n\r\nTo use a fixed length list, one should use the `list_size` argument of `pyarrow.list_()`.\r\nI believe this would work directly modulo some changes in the numpy conversion here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/dbac87c8a083f806467f5afc4ec9b401a7e4c15c/src/datasets/features.py#L381-L395"
] | 2021-03-30T14:46:09
| 2021-04-16T13:07:02
| null |
NONE
| null | null | null | null |
Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as a 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": "",
"homepage": "",
"license": "",
"features": {
"image": {
"shape": [224, 224, 3],
"dtype": "uint8",
"id": null,
"_type": "Array3D",
}
},
"post_processed": null,
"supervised_keys": null,
"builder_name": "shot_type_image_dataset",
"config_name": "default",
"version": {
"version_str": "0.0.0",
"description": null,
"major": 0,
"minor": 0,
"patch": 0,
},
"splits": {
"train": {
"name": "train",
"num_bytes": 520803408,
"num_examples": 1479,
"dataset_name": "shot_type_image_dataset",
}
},
"download_checksums": {
"": {
"num_bytes": 16940447118,
"checksum": "5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03",
}
},
"download_size": 16940447118,
"post_processing_size": null,
"dataset_size": 520803408,
"size_in_bytes": 17461250526,
}`
I have created the same dataset with tensorflow_datasets and it takes only 125 MB on disk.
I am wondering, is this normal behavior? I understand `Datasets` uses Arrow for serialization whereas TF uses TFRecords.
This might be a problem for large datasets. A back-of-the-envelope check of the expected raw size is sketched at the end of this description.
Thanks for your help.
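A hedged back-of-the-envelope check of the numbers above: the raw uint8 pixel data alone is about 222 MB, so the reported 520 MB Arrow file carries a roughly 2.3x overhead (attributed to list offsets in the discussion in the comments):
```python
# Hedged sanity check of the sizes reported above, using only values from
# dataset_info.json: raw uint8 pixels versus the on-disk Arrow size.
num_examples = 1479
height, width, channels = 224, 224, 3

raw_bytes = num_examples * height * width * channels  # 1 byte per uint8 value
arrow_bytes = 520_803_408                              # dataset_size from dataset_info.json

print(raw_bytes)                # 222630912 ≈ 222 MB of raw pixel data
print(arrow_bytes / raw_bytes)  # ≈ 2.34x larger on disk
```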
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2146/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2146/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2144
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2144/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2144/events
|
https://github.com/huggingface/datasets/issues/2144
| 844,352,067
|
MDU6SXNzdWU4NDQzNTIwNjc=
| 2,144
|
Loading wikipedia 20200501.en throws pyarrow related error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26637405?v=4",
"events_url": "https://api.github.com/users/TomPyonsuke/events{/privacy}",
"followers_url": "https://api.github.com/users/TomPyonsuke/followers",
"following_url": "https://api.github.com/users/TomPyonsuke/following{/other_user}",
"gists_url": "https://api.github.com/users/TomPyonsuke/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TomPyonsuke",
"id": 26637405,
"login": "TomPyonsuke",
"node_id": "MDQ6VXNlcjI2NjM3NDA1",
"organizations_url": "https://api.github.com/users/TomPyonsuke/orgs",
"received_events_url": "https://api.github.com/users/TomPyonsuke/received_events",
"repos_url": "https://api.github.com/users/TomPyonsuke/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TomPyonsuke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomPyonsuke/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TomPyonsuke",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')\r\n```",
"Hi ! It looks like the arrow file in the folder\r\n`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.\r\n\r\nCan you take a look and check that it's 18.3GB ?\r\n\r\nIf not, then maybe you need to redownload it:\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache', download_mode=\"force_redownload\")\r\n```",
"> Hi ! It looks like the arrow file in the folder\r\n> `/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.\r\n> \r\n> Can you take a look and check that it's 18.3GB ?\r\n> \r\n> If not, then maybe you need to redownload it:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache', download_mode=\"force_redownload\")\r\n> ```\r\n\r\nHi Ihoestq, thanks for the reply! Actually i think my issue is i couldn't download the dataset beyond 10.7G. It feels like the whole dataset is split into different volumes and after the first one was downloaded it crashed before proceeding to the next one. I did try 'force_redownload' mode but still got the same issue.",
"I just tried on my side and got no issues.\r\nWhen downloading the dataset again, did it crash at 10.7GB as well ?",
"> I just tried on my side and got no issues.\r\n> When downloading the dataset again, did it crash at 10.7GB as well ?\r\n\r\nYes i have tried it multiple times on different machines. I am wondering if you could share the screenshot of your dependency versions and i will try to make them the same as yours?",
"I tried using `datasets` from `master` on macos with python 3.7.2\r\nI also have `requests==2.23.0` and `tqdm==4.45.0`."
] | 2021-03-30T10:38:31
| 2021-04-01T09:21:17
| null |
NONE
| null | null | null | null |
**Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931...
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6k/14.6k [00:00<00:00, 5.41MB/s]
Downloading: 59%|███████████████████████████████████████████████████████████████████████████████████████▊ | 10.7G/18.3G [11:30<08:08, 15.5MB/s]
Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data.
Traceback (most recent call last):
File "load_wiki.py", line 2, in <module>
ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset
map_tuple=True,
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table
pa_table = f.read_all()
File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Expected to be able to read 9176784 bytes for message body, got 4918712
**Detailed version info**
datasets==1.5.0
- dataclasses [required: Any, installed: 0.8]
- dill [required: Any, installed: 0.3.3]
- fsspec [required: Any, installed: 0.8.7]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- huggingface-hub [required: <0.1.0, installed: 0.0.7]
- filelock [required: Any, installed: 3.0.12]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- requests [required: Any, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: Any, installed: 4.49.0]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- multiprocess [required: Any, installed: 0.70.11.1]
- dill [required: >=0.3.3, installed: 0.3.3]
- numpy [required: >=1.17, installed: 1.17.0]
- pandas [required: Any, installed: 1.1.5]
- numpy [required: >=1.15.4, installed: 1.17.0]
- python-dateutil [required: >=2.7.3, installed: 2.8.0]
- six [required: >=1.5, installed: 1.15.0]
- pytz [required: >=2017.2, installed: 2020.1]
- pyarrow [required: >=0.17.1, installed: 3.0.0]
- numpy [required: >=1.16.6, installed: 1.17.0]
- requests [required: >=2.19.0, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: >=4.27,<4.50.0, installed: 4.49.0]
- xxhash [required: Any, installed: 2.0.0]
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2144/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2139
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2139/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2139/events
|
https://github.com/huggingface/datasets/issues/2139
| 843,662,613
|
MDU6SXNzdWU4NDM2NjI2MTM=
| 2,139
|
TypeError when using save_to_disk in a dataset loaded with ReadInstruction split
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4",
"events_url": "https://api.github.com/users/PedroMLF/events{/privacy}",
"followers_url": "https://api.github.com/users/PedroMLF/followers",
"following_url": "https://api.github.com/users/PedroMLF/following{/other_user}",
"gists_url": "https://api.github.com/users/PedroMLF/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PedroMLF",
"id": 22480495,
"login": "PedroMLF",
"node_id": "MDQ6VXNlcjIyNDgwNDk1",
"organizations_url": "https://api.github.com/users/PedroMLF/orgs",
"received_events_url": "https://api.github.com/users/PedroMLF/received_events",
"repos_url": "https://api.github.com/users/PedroMLF/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PedroMLF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PedroMLF/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PedroMLF",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi !\r\nI think this has been fixed recently on `master`.\r\nCan you try again by installing `datasets` from `master` ?\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"Hi!\r\n\r\nUsing that version of the code solves the issue. Thanks!"
] | 2021-03-29T18:23:54
| 2021-03-30T09:12:53
| 2021-03-30T09:12:53
|
NONE
| null | null | null | null |
Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from datasets import ReadInstruction
data_1 = load_dataset(
"wikiann",
"en",
split="validation",
)
data_1.save_to_disk("temporary_path_1")
print("Save with regular split works.")
data_2 = load_dataset(
"wikiann",
"en",
split=ReadInstruction("validation", to=50, unit="%"),
)
data_2.save_to_disk("temporary_path_2")
```
and the corresponding output:
```
Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)
Save with regular split works.
Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)
Traceback (most recent call last):
File "bug.py", line 20, in <module>
data_2.save_to_disk("temporary_path_2")
File "/xxxxx/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 645, in save_to_disk
json.dump(state, state_file, indent=2, sort_keys=True)
File "/usr/lib/python3.7/json/__init__.py", line 179, in dump
for chunk in iterable:
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/usr/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type ReadInstruction is not JSON serializable
```
Let me know if there is some misuse from my end.
Thanks in advance.
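As noted in the comments, installing `datasets` from master fixes this. As a hedged alternative sketch, the equivalent string slicing syntax for splits produces the same subset without passing a `ReadInstruction` object, which should avoid the JSON serialization error in `save_to_disk`:
```python
# Hedged sketch of a workaround: the string split syntax "validation[:50%]" is
# equivalent to ReadInstruction("validation", to=50, unit="%") but keeps the
# dataset state JSON-serializable, so save_to_disk should not hit the TypeError.
from datasets import load_dataset

data_2 = load_dataset("wikiann", "en", split="validation[:50%]")
data_2.save_to_disk("temporary_path_2")
```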
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4",
"events_url": "https://api.github.com/users/PedroMLF/events{/privacy}",
"followers_url": "https://api.github.com/users/PedroMLF/followers",
"following_url": "https://api.github.com/users/PedroMLF/following{/other_user}",
"gists_url": "https://api.github.com/users/PedroMLF/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PedroMLF",
"id": 22480495,
"login": "PedroMLF",
"node_id": "MDQ6VXNlcjIyNDgwNDk1",
"organizations_url": "https://api.github.com/users/PedroMLF/orgs",
"received_events_url": "https://api.github.com/users/PedroMLF/received_events",
"repos_url": "https://api.github.com/users/PedroMLF/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PedroMLF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PedroMLF/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PedroMLF",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2139/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2139/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 14:48:59
|
https://api.github.com/repos/huggingface/datasets/issues/2135
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2135/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2135/events
|
https://github.com/huggingface/datasets/issues/2135
| 843,246,344
|
MDU6SXNzdWU4NDMyNDYzNDQ=
| 2,135
|
en language data from MLQA dataset is missing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?",
"Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, in the link you sent I see split of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en, TFDS most probably has extracted english ones from these files as en language files, but translate-train/test do not have en indeed. thanks a lot for the great explanations",
"I close the ticket, since I do not see any en existing, they have trained on \"SQuAD V1.1\" instead. Thanks. "
] | 2021-03-29T10:47:50
| 2021-03-30T10:20:23
| 2021-03-30T10:20:23
|
CONTRIBUTOR
| null | null | null | null |
Hi
I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq, thank you for your help in fixing this issue. (A sketch based on the resolution in the comments is included below.)
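A hedged sketch following the resolution in the comments: `mlqa-translate-train` has no English config, and English training data comes from SQuAD v1.1 instead; the `mlqa-translate-train.de` config below is assumed as one example of the available translated configs:
```python
# Hedged sketch based on the resolution in the comments: there is no
# mlqa-translate-train.en config; English training data is SQuAD v1.1, while
# the translate-train configs cover the translated languages (e.g. German).
from datasets import load_dataset

english_train = load_dataset("squad", split="train")             # SQuAD v1.1
mlqa_de_train = load_dataset("mlqa", "mlqa-translate-train.de")  # translated training data
print(english_train)
print(mlqa_de_train)
```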
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2135/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 23:32:33
|
https://api.github.com/repos/huggingface/datasets/issues/2134
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2134/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2134/events
|
https://github.com/huggingface/datasets/issues/2134
| 843,242,849
|
MDU6SXNzdWU4NDMyNDI4NDk=
| 2,134
|
Saving large in-memory datasets with save_to_disk crashes because of pickling
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4",
"events_url": "https://api.github.com/users/prokopCerny/events{/privacy}",
"followers_url": "https://api.github.com/users/prokopCerny/followers",
"following_url": "https://api.github.com/users/prokopCerny/following{/other_user}",
"gists_url": "https://api.github.com/users/prokopCerny/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prokopCerny",
"id": 5815801,
"login": "prokopCerny",
"node_id": "MDQ6VXNlcjU4MTU4MDE=",
"organizations_url": "https://api.github.com/users/prokopCerny/orgs",
"received_events_url": "https://api.github.com/users/prokopCerny/received_events",
"repos_url": "https://api.github.com/users/prokopCerny/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prokopCerny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prokopCerny/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prokopCerny",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Hi !\r\nIndeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:\r\n```python\r\nimport pyarrow as pa\r\nimport pickle\r\n\r\narr = pa.array([0] * ((4 * 8 << 30) // 64))\r\ntable = pa.Table.from_arrays([a], names=[\"foo\"])\r\npickle.dumps(table) # fails with an OverflowError\r\npickle.dumps(table, 4) # works !\r\n```\r\nWe'll do the change to use `protocol=4`.\r\n\r\nMoreover I've also seen other users complain about this error\r\n```\r\nstruct.error: 'I' format requires 0 <= number <= 4294967295\r\n```\r\n\r\nIt looks like something related to the 4GB limit as well but I'm not able to reproduce on my side.\r\nDo you think you can provide a script that reproduces the issue ?\r\nHow big is your dataset ? (number of bytes, number of rows)\r\n\r\n",
"Hi!\r\nSo I've managed to created a minimum working (well technically crashing) example for the multiprocessing case, I create a huge list of zeros, like in your example, and then I try to .map(None, num_proc=2) over it, which then crashes, here's the code:\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nif __name__ == '__main__':\r\n ton_of_zeroes = [0] * ((12 * 8 << 30) // 64)\r\n large_dataset = Dataset.from_dict({'col': ton_of_zeroes})\r\n print(\"Start\")\r\n large_dataset.map(function=None, num_proc=2)\r\n print(\"Done - should not print\")\r\n```\r\n\r\nThe amount of zeros could probably be reduced, I haven't tried to minimize it to find the breaking point, I just increased it from your code (which by quick glance I assumed tried to allocate over 4 GiB)\r\n\r\nRunning this results in the following traceback:\r\n\r\n```\r\nParameter 'indices'=[ 0 1 2 ... 805306365 805306366 805306367] of the transform datasets.arrow_dataset.Dataset.select couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\nTraceback (most recent call last):\r\n File \"./crash_multiproc_pickle.py\", line 7, in <module>\r\n large_dataset.map(function=None, num_proc=2)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1485, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1485, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py\", line 657, in get\r\n raise self._value\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py\", line 431, in _handle_tasks\r\n put(task)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 454, in dump\r\n StockPickler.dump(self, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 437, in dump\r\n self.save(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File 
\"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 662, in save_reduce\r\n save(state)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 
504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 843, in _batch_appends\r\n save(x)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 732, in save_bytes\r\n self._write_large_bytes(BINBYTES + pack(\"<I\", n), obj)\r\nstruct.error: 'I' format requires 0 <= number <= 4294967295\r\n```\r\n\r\nMy datasets usually have hundreds of thousands to low millions of rows, with each row containing a list of 10 strings and list of vectors of different length (the strings tokenized), which in the worst case have 10\\*512\\*8 = 40960 bytes (but usually it is much smaller, as the vectors tend to be shorter. I need these groups of text lines to create training data for the Inverse Cloze Task.\r\n\r\nAnyway I don't think my particular dataset is relevant, as the tiny script I created also manages to crash.\r\nBut I think the issue is the same as the save_to_disk, from the traceback it seems that in multiprocessing, it tries to use dill to return the result of the map workers, which tries to pickle the data and can't do it, probably because it's again using the older pickle protocol. That's my guess anyway.",
"I just merged a fix #2150 that allows to pickle tables bigger than 4GiB\r\nFeel free to try it on the `master` branch !",
"awesome! I started getting this error as well when I tried to tokenize with a longer sequence length",
"@prokopCerny does this fix work for you? I found that with the latest master, my container with 500GB RAM starts crashing when I try to map a large dataset using `num_proc`.\r\n\r\n@lhoestq would it be possible to implement some logic to keep the individual cache files small (say below 100mb)? I find this helps with loading large datasets, but the \"hack\" I was using (increasing `num_proc` to a large number) doesn't work anymore with the latest master; my container crashes even with `num_proc=200` now",
"Closing since the original issue was fixed in #2150 \r\nFeel free to reopen if you are still experiencing it.\r\nFor the other problems, please open separate issues"
] | 2021-03-29T10:43:15
| 2021-05-03T17:59:21
| 2021-05-03T17:59:21
|
NONE
| null | null | null | null |
Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from a few gigabytes to low tens of gigabytes), and have found that several preprocessing steps are massively faster when done in memory. Since I have the ability to requisition a lot of RAM, I decided to do these steps completely outside of the datasets library.
So my workflow is to run several .map() calls on a Dataset object, then, for the operation that is faster in memory, extract the necessary columns from the dataset and drop it entirely, do the transformation in memory, and finally create a fresh Dataset object using .from_dict() or another method.
When I then try to call save_to_disk(path) on the dataset, it crashes because of pickling, which appears to be due to the use of an old pickle protocol that doesn't support large objects (over 4 GiB).
```
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 80, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 75, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 60, in tokenize_and_chunkify
contexts_dataset.save_to_disk(chunked_path)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 457, in save_to_disk
self = pickle.loads(pickle.dumps(self))
OverflowError: cannot serialize a bytes object larger than 4 GiB
```
From what I've seen, this issue may already be fixed, as the line `self = pickle.loads(pickle.dumps(self))` does not appear to be present in the current state of the repository.
To save these datasets to disk, I've resorted to calling .map() over them with `function=None` and specifying the .arrow cache file, and then creating a new dataset using the .from_file() method, which I can then safely save to disk.
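For illustration, a minimal sketch of this workaround (file names and the column are made up; it assumes the in-memory data fits in a single `from_dict` call):
```python
from datasets import Dataset

# Illustrative in-memory dataset; in practice this holds the transformed columns
large_dataset = Dataset.from_dict({"col": list(range(1_000_000))})

# No-op map that only materializes the table into an .arrow cache file on disk
large_dataset.map(function=None, cache_file_name="tmp.arrow", writer_batch_size=50000)

# Reload from the cache file: the result is memory-mapped from disk, so the
# pickling step inside save_to_disk stays small and no longer overflows
on_disk_dataset = Dataset.from_file("tmp.arrow")
on_disk_dataset.save_to_disk("saved_dataset")
```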
An additional issue when working with these large in-memory datasets arises when using multiprocessing, and is again related to pickling. I've tried to speed up the mapping with function=None by setting num_proc to the available CPU count, and I again get issues with transferring the dataset, with the following traceback. I am not sure if I should open a separate issue for that.
```
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 94, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 89, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify
contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get
raise self._value
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks
put(task)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes
self._write_large_bytes(BINBYTES + pack("<I", n), obj)
struct.error: 'I' format requires 0 <= number <= 4294967295
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2134/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2134/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 35 days, 7:16:06
|
https://api.github.com/repos/huggingface/datasets/issues/2133
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2133/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2133/events
|
https://github.com/huggingface/datasets/issues/2133
| 843,149,680
|
MDU6SXNzdWU4NDMxNDk2ODA=
| 2,133
|
bug in mlqa dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"If you print those questions, you get readable texts:\r\n```python\r\n>>> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631?\",\r\n... \"\\u0643\\u0645 \\u0645\\u0631\\u0629 \\u064a\\u062a\\u0645 \\u0646\\u0634\\u0631\\u0647\\u0627 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0645\\u0627 \\u0647\\u064a \\u0627\\u0644\\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u064a\\u0648\\u0645\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0643\\u0645 \\u0639\\u062f\\u062f \\u0627\\u0644\\u0627\\u0648\\u0631\\u0627\\u0642 \\u0627\\u0644\\u0627\\u062e\\u0628\\u0627\\u0631\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0627\\u0644\\u062a\\u064a \\u0648\\u062c\\u062f\\u062a \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0641\\u064a \\u0627\\u064a \\u0633\\u0646\\u0629 \\u0628\\u062f\\u0627\\u062a \\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u0637\\u0627\\u0644\\u0628 \\u0627\\u0644\\u062d\\u0633 \\u0627\\u0644\\u0633\\u0644\\u064a\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\"\r\n... ]\r\n>>> print(questions)\r\n['متى بدات المجلة المدرسية في نوتردام بالنشر?', 'كم مرة يتم نشرها في نوتردام?', 'ما هي الورقة اليومية للطلاب في نوتردام?', 'كم عدد الاوراق الاخبارية للطلاب التي وجدت في نوتردام?', 'في اي سنة بدات ورقة الطالب الحس السليم بالنشر في نوتردام?']\r\n```\r\nI don't think we can change this",
"Hi @dorost1234.\r\n\r\nIn Python 3, strings are sequences of Unicode _code points_. Unicode is a specification that maps all characters (and emoji symbols) with its unique representation in terms of code points. That is what you see: Unicode code points (represented by a \\u escaped sequence of 16-bit hex values).\r\n\r\nCharacters are usually represented (on screen and papers) with a graphical element called _glyph_. That is what you would like to see: glyphs. But Python does not care about glyphs: that is the job of the GUI or the terminal; glyphs are what you get with the `print` function (if your terminal is properly configured to display those glyphs).\r\n\r\nYou have more detailed information about Unicode in the Python documentation: https://docs.python.org/3/howto/unicode.html",
"thank you so much for the insightful comments. "
] | 2021-03-29T09:03:09
| 2021-03-30T17:40:57
| 2021-03-30T17:40:57
|
NONE
| null | null | null | null |
Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?"
]
```
The questions appear to be in the wrong format and are not readable. Could you please have a look? Thanks @lhoestq
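For reference, these are escaped Unicode code points; printing one of them in Python renders the readable Arabic text (as shown in the comments above):
```python
question = "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?"
print(question)  # متى بدات المجلة المدرسية في نوتردام بالنشر?
```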
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2133/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 8:37:48
|
https://api.github.com/repos/huggingface/datasets/issues/2132
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2132/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2132/events
|
https://github.com/huggingface/datasets/issues/2132
| 843,142,822
|
MDU6SXNzdWU4NDMxNDI4MjI=
| 2,132
|
TydiQA dataset is mixed and is not split per language
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"You can filter the languages this way:\r\n```python\r\ntydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\r\n```\r\n\r\nOtherwise maybe we can have one configuration per language ?\r\nWhat do you think of this for example ?\r\n\r\n```python\r\nload_dataset(\"tydiqa\", \"primary_task.en\")\r\n```",
"Hi\nthank you very much for the great response, this will be really wonderful\nto have one configuration per language, as one need the dataset in majority\nof case per language for cross-lingual evaluations.\nThis becomes also then more close to TFDS format, which is separated per\nlanguage https://www.tensorflow.org/datasets/catalog/tydi_qa which will be\nreally awesome to have.\nthanks\n\nOn Mon, Mar 29, 2021 at 6:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> You can filter the languages this way:\n>\n> tydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\n>\n> Otherwise maybe we can have one configuration per language ?\n> What do you think of this for example ?\n>\n> load_dataset(\"tydiqa\", \"primary_task.en\")\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2132#issuecomment-809516799>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXPW2PWSQ2RHG73O7TTGCY4LANCNFSM4Z7ER7IA>\n> .\n>\n",
"@lhoestq I greatly appreciate any updates on this. thanks a lot"
] | 2021-03-29T08:56:21
| 2021-04-04T09:57:15
| null |
NONE
| null | null | null | null |
Hi @lhoestq
Currently TydiQA is mixed and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes it hard to use. It would be much more convenient for users to have them split, and I appreciate your help on this.
Meanwhile, until this is hopefully split per language, I would greatly appreciate guidance on how I can preprocess the data and get it per language. Thanks a lot.
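For reference, a minimal sketch of the per-language filtering suggested in the comments above (assuming the `primary_task` config exposes a `language` column):
```python
from datasets import load_dataset

# Load the mixed training set and keep only one language
tydiqa = load_dataset("tydiqa", "primary_task", split="train")
tydiqa_en = tydiqa.filter(lambda example: example["language"] == "english")
print(tydiqa_en.num_rows)
```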
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2132/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2132/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2131
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2131/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2131/events
|
https://github.com/huggingface/datasets/issues/2131
| 843,133,112
|
MDU6SXNzdWU4NDMxMzMxMTI=
| 2,131
|
When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4",
"events_url": "https://api.github.com/users/andy-yangz/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-yangz/followers",
"following_url": "https://api.github.com/users/andy-yangz/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-yangz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andy-yangz",
"id": 23011317,
"login": "andy-yangz",
"node_id": "MDQ6VXNlcjIzMDExMzE3",
"organizations_url": "https://api.github.com/users/andy-yangz/orgs",
"received_events_url": "https://api.github.com/users/andy-yangz/received_events",
"repos_url": "https://api.github.com/users/andy-yangz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andy-yangz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-yangz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andy-yangz",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Hi ! Thanks for reporting\r\nI was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.\r\n\r\nI just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue",
"The PR got merged :)\r\nFeel free to try it out on the `master` branch",
"Sorry for the late reply. \r\nNow everything just works well XD"
] | 2021-03-29T08:45:58
| 2021-04-10T11:08:55
| 2021-04-10T11:08:55
|
NONE
| null | null | null | null |
version: 1.5.0
I met a very strange error. I am training a large-scale language model and need to train on 2 machines (workers).
Sometimes I get this error: `TypeError: 'NoneType' object is not iterable`
This is the traceback:
```
71 | | Traceback (most recent call last):
-- | -- | --
72 | | File "run_gpt.py", line 316, in <module>
73 | | main()
74 | | File "run_gpt.py", line 222, in main
75 | | delimiter="\t", column_names=["input_ids", "attention_mask", "chinese_ref"])
76 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/load.py", line 747, in load_dataset
77 | | use_auth_token=use_auth_token,
78 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 513, in download_and_prepare
79 | | self.download_post_processing_resources(dl_manager)
80 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 673, in download_post_processing_resources
81 | | for split in self.info.splits:
82 | | TypeError: 'NoneType' object is not iterable
83 | | WARNING:datasets.builder:Reusing dataset csv (/usr/local/app/.cache/huggingface/datasets/csv/default-1c257ebd48e225e7/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)
84 | | Traceback (most recent call last):
85 | | File "/data/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
86 | | "__main__", mod_spec)
87 | | File "/data/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
88 | | exec(code, run_globals)
89 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
90 | | main()
91 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
92 | | sigkill_handler(signal.SIGTERM, None) # not coming back
93 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
94 | | raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
```
On worker 1 the dataset loads fine; however, on worker 2 I get this error.
I run into this error from time to time; sometimes it just works.
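The root cause was fixed in #2137 (see the comments above); until then, a common way to avoid two processes racing on the same cache is to let only one process per node prepare the dataset. A rough sketch, with the helper name made up and assuming `torch.distributed` is already initialized and `LOCAL_RANK` is set by the launcher:
```python
import os
import torch.distributed as dist
from datasets import load_dataset

def load_dataset_local_rank_safe(*args, **kwargs):
    # Only local rank 0 on each node downloads/prepares the dataset; the other
    # processes wait at the barrier and then reuse the freshly written cache.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    if local_rank != 0:
        dist.barrier()
    dataset = load_dataset(*args, **kwargs)
    if local_rank == 0:
        dist.barrier()
    return dataset
```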
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4",
"events_url": "https://api.github.com/users/andy-yangz/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-yangz/followers",
"following_url": "https://api.github.com/users/andy-yangz/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-yangz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andy-yangz",
"id": 23011317,
"login": "andy-yangz",
"node_id": "MDQ6VXNlcjIzMDExMzE3",
"organizations_url": "https://api.github.com/users/andy-yangz/orgs",
"received_events_url": "https://api.github.com/users/andy-yangz/received_events",
"repos_url": "https://api.github.com/users/andy-yangz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andy-yangz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-yangz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andy-yangz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2131/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2131/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 12 days, 2:22:57
|
https://api.github.com/repos/huggingface/datasets/issues/2130
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2130/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2130/events
|
https://github.com/huggingface/datasets/issues/2130
| 843,111,936
|
MDU6SXNzdWU4NDMxMTE5MzY=
| 2,130
|
wikiann dataset is missing columns
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"Here please find TFDS format of this dataset: https://www.tensorflow.org/datasets/catalog/wikiann\r\nwhere there is a span column, this is really necessary to be able to use the data, and I appreciate your help @lhoestq ",
"Hi !\r\nApparently you can get the spans from the NER tags using `tags_to_spans` defined here:\r\n\r\nhttps://github.com/tensorflow/datasets/blob/c7096bd38e86ed240b8b2c11ecab9893715a7d55/tensorflow_datasets/text/wikiann/wikiann.py#L81-L126\r\n\r\nIt would be nice to include the `spans` field in this dataset as in TFDS. This could be a good first issue for new contributors !\r\n\r\nThe objective is to use `tags_to_spans` in the `_generate_examples` method [here](https://github.com/huggingface/nlp/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L292-L316) to create he `spans` for each example.",
"Hi @lhoestq \r\nthank you very much for the help, it would be very nice to have it included, here is the full code, one need to also convert tags to string first:\r\n\r\n```\r\nimport datasets \r\nfrom datasets import load_dataset\r\n\r\ndef tags_to_spans(tags):\r\n \"\"\"Convert tags to spans.\"\"\"\r\n spans = set()\r\n span_start = 0\r\n span_end = 0\r\n active_conll_tag = None\r\n for index, string_tag in enumerate(tags):\r\n # Actual BIO tag.\r\n bio_tag = string_tag[0]\r\n assert bio_tag in [\"B\", \"I\", \"O\"], \"Invalid Tag\"\r\n conll_tag = string_tag[2:]\r\n if bio_tag == \"O\":\r\n # The span has ended.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = None\r\n # We don't care about tags we are\r\n # told to ignore, so we do nothing.\r\n continue\r\n elif bio_tag == \"B\":\r\n # We are entering a new span; reset indices and active tag to new span.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = conll_tag\r\n span_start = index\r\n span_end = index\r\n elif bio_tag == \"I\" and conll_tag == active_conll_tag:\r\n # We're inside a span.\r\n span_end += 1\r\n else:\r\n # This is the case the bio label is an \"I\", but either:\r\n # 1) the span hasn't started - i.e. an ill formed span.\r\n # 2) We have IOB1 tagging scheme.\r\n # We'll process the previous span if it exists, but also include this\r\n # span. This is important, because otherwise, a model may get a perfect\r\n # F1 score whilst still including false positive ill-formed spans.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = conll_tag\r\n span_start = index\r\n span_end = index\r\n # Last token might have been a part of a valid span.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n # Return sorted list of spans\r\n return sorted(list(spans), key=lambda x: x[1][0])\r\n\r\ndataset = load_dataset('wikiann', 'en', split=\"train\")\r\nner_tags = {\r\n 0:\"O\",\r\n 1:\"B-PER\",\r\n 2:\"I-PER\",\r\n 3:\"B-ORG\",\r\n 4:\"I-ORG\",\r\n 5:\"B-LOC\",\r\n 6:\"I-LOC\"\r\n}\r\n\r\ndef get_spans(tokens, tags):\r\n \"\"\"Convert tags to textspans.\"\"\"\r\n spans = tags_to_spans(tags)\r\n text_spans = [\r\n x[0] + \": \" + \" \".join([tokens[i]\r\n for i in range(x[1][0], x[1][1] + 1)])\r\n for x in spans\r\n ]\r\n if not text_spans:\r\n text_spans = [\"None\"]\r\n return text_spans\r\n\r\n\r\nfor i, d in enumerate(dataset):\r\n tokens = d['tokens']\r\n tags = d['ner_tags']\r\n tags = [ner_tags[i] for i in tags]\r\n spans = get_spans(tokens, tags)\r\n print(\"spans \", spans)\r\n print(d)\r\n if i > 10:\r\n break; \r\n```\r\nI am not sure how to contribute to the repository and how things work, could you let me know how one can access the datasets to be able to contribute to the repository? Maybe I could do it then\r\nthanks \r\n",
"Cool ! Let me give you some context:\r\n\r\n#### Contribution guide\r\n\r\nYou can find the contribution guide here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md\r\n\r\nIt explains how to set up your dev environment in a few steps.\r\n\r\n#### Dataset loading\r\n\r\nEach Dataset is defined by a Table that have many rows (one row = one example) and columns (one column = one feature).\r\nTo change how a dataset is constructed, you have to modify its dataset script that you can find here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/wikiann/wikiann.py\r\n\r\nIt includes everything needed to load the WikiANN dataset.\r\nYou can load locally a modified version of `wikiann.py` with `load_dataset(\"path/to/wikiann.py\")`.\r\n\r\n#### Define a new column\r\n\r\nEach column has a name and a type. You can see how the features of WikiANN are defined here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L245-L263\r\n\r\nIdeally we would have one additional feature \"spans\":\r\n```python\r\n \"spans\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\n#### Compute the content of each row\r\n\r\nTo build the WikiANN rows, the _generate_examples method from [here](https://github.com/huggingface/nlp/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L292-L316) is used. This function `yield` one python dictionary for each example:\r\n```python\r\nyield guid_index, {\"tokens\": tokens, \"ner_tags\": ner_tags, \"langs\": langs}\r\n```\r\n\r\nThe objective would be to return instead something like\r\n```python\r\nspans = spans = get_spans(tokens, tags)\r\nyield guid_index, {\"tokens\": tokens, \"ner_tags\": ner_tags, \"langs\": langs, \"spans\": spans}\r\n```\r\n\r\nLet me know if you have questions !",
"The PR was merged. Issue should be closed.\r\n\r\nCC: @lhoestq "
] | 2021-03-29T08:23:00
| 2021-08-27T14:44:18
| 2021-08-27T14:44:18
|
NONE
| null | null | null | null |
Hi
The WikiANN dataset needs a "spans" column, which is necessary to be able to use this dataset, but this column is missing from the huggingface datasets version. Could you please have a look? Thank you @lhoestq
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2130/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2130/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 151 days, 6:21:18
|
https://api.github.com/repos/huggingface/datasets/issues/2129
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2129/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2129/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2129/events
|
https://github.com/huggingface/datasets/issues/2129
| 843,033,656
|
MDU6SXNzdWU4NDMwMzM2NTY=
| 2,129
|
How to train BERT model with next sentence prediction?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4",
"events_url": "https://api.github.com/users/jnishi/events{/privacy}",
"followers_url": "https://api.github.com/users/jnishi/followers",
"following_url": "https://api.github.com/users/jnishi/following{/other_user}",
"gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jnishi",
"id": 836541,
"login": "jnishi",
"node_id": "MDQ6VXNlcjgzNjU0MQ==",
"organizations_url": "https://api.github.com/users/jnishi/orgs",
"received_events_url": "https://api.github.com/users/jnishi/received_events",
"repos_url": "https://api.github.com/users/jnishi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jnishi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jnishi",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi !\r\nWe're not using `TextDatasetForNextSentencePrediction` in `datasets`.\r\nAlthough you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction.",
"Thanks.\r\n\r\nDo you mean that `TextDatasetForNextSentencePrediction.create_exapmles_from_document` can be applied to dataset object other than `TextDatasetForNextSentencePrediction` e.g. a `Dataset` object which is loaded by `datasets.load_dataset`?",
"It would probably require a bit of tweaking, but you can apply it to a dataset, yes.\r\nThis should give you a new dataset with sentence pairs you can train a model on.\r\n\r\nYou can find the documentation about dataset processing here:\r\nhttps://huggingface.co/docs/datasets/processing.html#processing-data-with-map",
"Thank you for detail information.\r\n\r\nI'll try to apply `create_examples_from_document` to `Dataset` object.\r\n"
] | 2021-03-29T06:48:03
| 2021-04-01T04:58:40
| 2021-04-01T04:58:40
|
NONE
| null | null | null | null |
Hello.
I'm trying to pretrain a BERT model with next sentence prediction. Is there any function that supports next sentence prediction,
like `TextDatasetForNextSentencePrediction` in `huggingface/transformers`?
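For context, a rough sketch (my own simplification, not the `transformers` implementation) of how NSP-style sentence pairs could be built directly with `datasets.map`; the corpus, the naive sentence splitting, and the column names are placeholders:
```python
import random
from datasets import load_dataset

raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def make_nsp_pairs(batch):
    # Very naive sentence splitting; real NSP preparation keeps document boundaries.
    sentences = [s.strip() for text in batch["text"] for s in text.split(".") if s.strip()]
    sent_a, sent_b, labels = [], [], []
    for i in range(len(sentences) - 1):
        if random.random() < 0.5:
            second = sentences[i + 1]          # actual next sentence -> label 0
            label = 0
        else:
            second = random.choice(sentences)  # random sentence -> label 1
            label = 1
        sent_a.append(sentences[i])
        sent_b.append(second)
        labels.append(label)
    return {"sentence_a": sent_a, "sentence_b": sent_b, "next_sentence_label": labels}

# Batched map may return a different number of rows, which is what we want here.
nsp_dataset = raw.map(make_nsp_pairs, batched=True, remove_columns=raw.column_names)
```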
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4",
"events_url": "https://api.github.com/users/jnishi/events{/privacy}",
"followers_url": "https://api.github.com/users/jnishi/followers",
"following_url": "https://api.github.com/users/jnishi/following{/other_user}",
"gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jnishi",
"id": 836541,
"login": "jnishi",
"node_id": "MDQ6VXNlcjgzNjU0MQ==",
"organizations_url": "https://api.github.com/users/jnishi/orgs",
"received_events_url": "https://api.github.com/users/jnishi/received_events",
"repos_url": "https://api.github.com/users/jnishi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jnishi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jnishi",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2129/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2129/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 22:10:37
|
https://api.github.com/repos/huggingface/datasets/issues/2128
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2128/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2128/events
|
https://github.com/huggingface/datasets/issues/2128
| 843,023,910
|
MDU6SXNzdWU4NDMwMjM5MTA=
| 2,128
|
Dialogue action slot name and value are reversed in MultiWoZ 2.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adamlin120",
"id": 31605305,
"login": "adamlin120",
"node_id": "MDQ6VXNlcjMxNjA1MzA1",
"organizations_url": "https://api.github.com/users/adamlin120/orgs",
"received_events_url": "https://api.github.com/users/adamlin120/received_events",
"repos_url": "https://api.github.com/users/adamlin120/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adamlin120",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
| null |
[] |
[
"Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) "
] | 2021-03-29T06:34:02
| 2021-03-31T12:48:01
| 2021-03-31T12:48:01
|
CONTRIBUTOR
| null | null | null | null |
Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial!
I spotted an error: the order of dialogue action slot names and values is reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2128/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 6:13:59
|
https://api.github.com/repos/huggingface/datasets/issues/2125
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2125/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2125/events
|
https://github.com/huggingface/datasets/issues/2125
| 842,690,570
|
MDU6SXNzdWU4NDI2OTA1NzA=
| 2,125
|
Is dataset timit_asr broken?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4",
"events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}",
"followers_url": "https://api.github.com/users/kosuke-kitahara/followers",
"following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}",
"gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kosuke-kitahara",
"id": 42398050,
"login": "kosuke-kitahara",
"node_id": "MDQ6VXNlcjQyMzk4MDUw",
"organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs",
"received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events",
"repos_url": "https://api.github.com/users/kosuke-kitahara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kosuke-kitahara",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ",
"@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem."
] | 2021-03-28T08:30:18
| 2021-03-28T12:29:25
| 2021-03-28T12:29:25
|
NONE
| null | null | null | null |
Using the `timit_asr` dataset, I saw that all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    display(HTML(df.to_html()))

show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id",
                                                    "sentence_type", "speaker_id"]), num_examples=20)
```
`output`
<img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png">
I double-checked it [here](https://huggingface.co/datasets/viewer/), and met the same problem.
<img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4",
"events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}",
"followers_url": "https://api.github.com/users/kosuke-kitahara/followers",
"following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}",
"gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kosuke-kitahara",
"id": 42398050,
"login": "kosuke-kitahara",
"node_id": "MDQ6VXNlcjQyMzk4MDUw",
"organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs",
"received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events",
"repos_url": "https://api.github.com/users/kosuke-kitahara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kosuke-kitahara",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2125/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3:59:07
|
https://api.github.com/repos/huggingface/datasets/issues/2124
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2124/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2124/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2124/events
|
https://github.com/huggingface/datasets/issues/2124
| 842,627,729
|
MDU6SXNzdWU4NDI2Mjc3Mjk=
| 2,124
|
Adding ScaNN library to do MIPS?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I haven't played with it (yet) but it sounds really cool !\r\n"
] | 2021-03-28T00:07:00
| 2021-03-29T13:23:43
| null |
NONE
| null | null | null | null |
@lhoestq Hi, I am thinking of adding this new Google library to do MIPS, similar to **add_faiss_index**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors.
https://github.com/google-research/google-research/tree/master/scann

| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2124/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2124/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2123
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2123/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2123/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2123/events
|
https://github.com/huggingface/datasets/issues/2123
| 842,577,285
|
MDU6SXNzdWU4NDI1NzcyODU=
| 2,123
|
Problem downloading GEM wiki_auto_asset_turk dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29705940?v=4",
"events_url": "https://api.github.com/users/mille-s/events{/privacy}",
"followers_url": "https://api.github.com/users/mille-s/followers",
"following_url": "https://api.github.com/users/mille-s/following{/other_user}",
"gists_url": "https://api.github.com/users/mille-s/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mille-s",
"id": 29705940,
"login": "mille-s",
"node_id": "MDQ6VXNlcjI5NzA1OTQw",
"organizations_url": "https://api.github.com/users/mille-s/orgs",
"received_events_url": "https://api.github.com/users/mille-s/received_events",
"repos_url": "https://api.github.com/users/mille-s/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mille-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mille-s/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mille-s",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nsadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n``` ",
"Thanks for the answer! I updated the library but unfortunately it didn't solve the problem.",
"Is there an error message ?\r\nWhat stacktrace do you get if you interrupt the execution of the program while downloading ?",
"Sorry for the long time since my last comment, I tried again and don't seem to have the problem anymore, thanks for your support!",
"Great ! I'm closing the issue then. Feel free to re-open if you experience this issue again"
] | 2021-03-27T18:41:28
| 2021-05-12T16:15:18
| 2021-05-12T16:15:17
|
NONE
| null | null | null | null |
@yjernite
### Summary
I am currently working on the GEM datasets and cannot download the wiki_auto_asset_turk data, whereas all other datasets download fine with the same code.
### Steps to reproduce
Code snippet:
```python
from datasets import load_dataset
#dataset = load_dataset('gem', 'web_nlg_en')
dataset = load_dataset('gem', 'wiki_auto_asset_turk')
```
**Expected behavior:**
I expect the dataset to start downloading (download bar appears and progresses toward 100%)
**Actual behavior:**
Instead of seeing the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more:
```
Downloading: 36.6kB [00:00, 37.2MB/s]
Downloading: 41.7kB [00:00, ?B/s]
Downloading and preparing dataset gem/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\Users\sfmil\.cache\huggingface\datasets\gem\wiki_auto_asset_turk\1.0.0\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d...
```
### Is this a regression?
No, it was the first time I was trying to download this dataset (same for the other ones).
### Debug info
- Python version: Python 3.8.2
- OS version: Windows 10 Family
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2123/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2123/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 45 days, 21:33:49
|
https://api.github.com/repos/huggingface/datasets/issues/2120
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2120/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2120/events
|
https://github.com/huggingface/datasets/issues/2120
| 841,954,521
|
MDU6SXNzdWU4NDE5NTQ1MjE=
| 2,120
|
dataset viewer does not work anymore
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting :) We're looking into it",
"Back up. "
] | 2021-03-26T13:22:13
| 2021-03-26T15:52:22
| 2021-03-26T15:52:22
|
NONE
| null | null | null | null |
Hi
I normally use this link to see all datasets and how I can load them
https://huggingface.co/datasets/viewer/
Now I am getting
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
Could you bring this webpage back? This was very helpful @lhoestq
thanks for your help
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2120/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2120/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:30:09
|
https://api.github.com/repos/huggingface/datasets/issues/2117
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2117/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2117/events
|
https://github.com/huggingface/datasets/issues/2117
| 841,535,283
|
MDU6SXNzdWU4NDE1MzUyODM=
| 2,117
|
load_metric from local "glue.py" meet error 'NoneType' object is not callable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4",
"events_url": "https://api.github.com/users/Frankie123421/events{/privacy}",
"followers_url": "https://api.github.com/users/Frankie123421/followers",
"following_url": "https://api.github.com/users/Frankie123421/following{/other_user}",
"gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Frankie123421",
"id": 54012361,
"login": "Frankie123421",
"node_id": "MDQ6VXNlcjU0MDEyMzYx",
"organizations_url": "https://api.github.com/users/Frankie123421/orgs",
"received_events_url": "https://api.github.com/users/Frankie123421/received_events",
"repos_url": "https://api.github.com/users/Frankie123421/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Frankie123421",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"@Frankie123421 what was the resolution to this?",
"> @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric",
"thank you!"
] | 2021-03-26T02:35:22
| 2021-08-25T21:44:05
| 2021-03-26T02:40:26
|
NONE
| null | null | null | null |
```python
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-7ab77a465d81> in <module>
      1 actual_task = "mnli" if task == "mnli-mm" else task
      2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task)
----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task)

~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
    508             keep_in_memory=keep_in_memory,
    509             experiment_id=experiment_id,
--> 510             **metric_init_kwargs,
    511         )
    512

TypeError: 'NoneType' object is not callable
```
Please help
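For anyone else who hits this: as noted in the comments above, the resolution was to point `load_metric` at the metric script rather than the dataset script. A sketch with illustrative paths:
```python
from datasets import load_dataset, load_metric

actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
# Load the *metric* script here, not the dataset script.
metric = load_metric(path='/home/glue_metric.py', name=actual_task)
```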
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4",
"events_url": "https://api.github.com/users/Frankie123421/events{/privacy}",
"followers_url": "https://api.github.com/users/Frankie123421/followers",
"following_url": "https://api.github.com/users/Frankie123421/following{/other_user}",
"gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Frankie123421",
"id": 54012361,
"login": "Frankie123421",
"node_id": "MDQ6VXNlcjU0MDEyMzYx",
"organizations_url": "https://api.github.com/users/Frankie123421/orgs",
"received_events_url": "https://api.github.com/users/Frankie123421/received_events",
"repos_url": "https://api.github.com/users/Frankie123421/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Frankie123421",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2117/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2117/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:05:04
|
https://api.github.com/repos/huggingface/datasets/issues/2116
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2116/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2116/events
|
https://github.com/huggingface/datasets/issues/2116
| 841,481,292
|
MDU6SXNzdWU4NDE0ODEyOTI=
| 2,116
|
Creating custom dataset results in error while calling the map() function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GeetDsa",
"id": 13940397,
"login": "GeetDsa",
"node_id": "MDQ6VXNlcjEzOTQwMzk3",
"organizations_url": "https://api.github.com/users/GeetDsa/orgs",
"received_events_url": "https://api.github.com/users/GeetDsa/received_events",
"repos_url": "https://api.github.com/users/GeetDsa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GeetDsa",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nthe `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the \"association over inheritance\" approach with a simple wrapper class that delegates calls to a wrapped `Dataset` (map, etc.). Btw, the library offers the `datasets.Dataset.from_pandas` class method to directly create a `datasets.Dataset` from the dataframe."
] | 2021-03-26T00:37:46
| 2021-03-31T14:30:32
| 2021-03-31T14:30:32
|
NONE
| null | null | null | null |
Calling `map()` of the `datasets` library results in an error when defining a custom dataset.
Reproducible example:
```
import datasets

class MyDataset(datasets.Dataset):
    def __init__(self, sentences):
        "Initialization"
        self.samples = sentences

    def __len__(self):
        "Denotes the total number of samples"
        return len(self.samples)

    def __getitem__(self, index):
        "Generates one sample of data"
        # Select sample
        # Load data and get label
        samples = self.samples[index]
        return samples

def preprocess_function_train(examples):
    inputs = examples
    labels = [example + tokenizer.eos_token for example in examples]
    inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True)
    labels = tokenizer(labels, max_length=30, padding=True, truncation=True)
    model_inputs = inputs
    model_inputs["labels"] = labels["input_ids"]
    print("about to return")
    return model_inputs

# train["sentence"] is a dataframe column
train_dataset = MyDataset(train['sentence'].values.tolist())
train_dataset = train_dataset.map(
    preprocess_function_train,
    batched=True,
    batch_size=32
)
```
Stack trace of error:
```
Traceback (most recent call last):
File "dir/train_generate.py", line 362, in <module>
main()
File "dir/train_generate.py", line 245, in main
train_dataset = train_dataset.map(
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map
return self._map_single(
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper
unformatted_columns = set(self.column_names) - set(self._format_columns or [])
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names
return self._data.column_names
AttributeError: 'MyDataset' object has no attribute '_data'
```
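For what it's worth, a minimal sketch of the `from_pandas` approach suggested in the comment above; it assumes `train` is the dataframe and `tokenizer` is defined as in the snippet:
```python
import datasets

# Build a datasets.Dataset directly from the dataframe instead of subclassing.
train_dataset = datasets.Dataset.from_pandas(train[["sentence"]])

def preprocess_function_train(examples):
    inputs = examples["sentence"]
    labels = [example + tokenizer.eos_token for example in inputs]
    model_inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True)
    labels = tokenizer(labels, max_length=30, padding=True, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = train_dataset.map(
    preprocess_function_train,
    batched=True,
    batch_size=32,
    remove_columns=["sentence"],
)
```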
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GeetDsa",
"id": 13940397,
"login": "GeetDsa",
"node_id": "MDQ6VXNlcjEzOTQwMzk3",
"organizations_url": "https://api.github.com/users/GeetDsa/orgs",
"received_events_url": "https://api.github.com/users/GeetDsa/received_events",
"repos_url": "https://api.github.com/users/GeetDsa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GeetDsa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2116/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2116/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 5 days, 13:52:46
|
https://api.github.com/repos/huggingface/datasets/issues/2115
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2115/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2115/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2115/events
|
https://github.com/huggingface/datasets/issues/2115
| 841,283,974
|
MDU6SXNzdWU4NDEyODM5NzQ=
| 2,115
|
The datasets.map() implementation modifies the datatype of os.environ object
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19983848?v=4",
"events_url": "https://api.github.com/users/leleamol/events{/privacy}",
"followers_url": "https://api.github.com/users/leleamol/followers",
"following_url": "https://api.github.com/users/leleamol/following{/other_user}",
"gists_url": "https://api.github.com/users/leleamol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leleamol",
"id": 19983848,
"login": "leleamol",
"node_id": "MDQ6VXNlcjE5OTgzODQ4",
"organizations_url": "https://api.github.com/users/leleamol/orgs",
"received_events_url": "https://api.github.com/users/leleamol/received_events",
"repos_url": "https://api.github.com/users/leleamol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leleamol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leleamol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leleamol",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2021-03-25T20:29:19
| 2021-03-26T15:13:52
| 2021-03-26T15:13:52
|
NONE
| null | null | null | null |
In our testing, we noticed that the datasets.map() implementation modifies the datatype of the Python os.environ object from '_Environ' to 'dict'.
This causes subsequent function calls to fail as follows:
```
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes no keyword arguments
```
It looks like the following line in the datasets.map implementation introduced this behavior.
https://github.com/huggingface/datasets/blob/0cb1ac06acb0df44a1cf4128d03a01865faa2504/src/datasets/arrow_dataset.py#L1421
Here is the test script to reproduce this error.
```
from datasets import load_dataset
from transformers import AutoTokenizer
import os
def test_train():
    model_checkpoint = "distilgpt2"
    datasets = load_dataset('wikitext', 'wikitext-2-raw-v1')
    tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
    tokenizer.pad_token = tokenizer.eos_token

    def tokenize_function(examples):
        y = tokenizer(examples['text'], truncation=True, max_length=64)
        return y

    x = os.environ.get("TEST_ENV_VARIABLE_BEFORE_dataset_map", default=None)
    print(f"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}")
    print(f"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}")

    datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"])

    print(f"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}")
    x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
    print(f"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}")

if __name__ == "__main__":
    test_train()
```
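Until this is fixed upstream, a possible workaround sketch (my own idea, not an official recommendation, reusing the names from the script above) is to keep a reference to the original mapping around the `.map()` call and restore it afterwards:
```python
import os

saved_environ = os.environ  # keep a reference to the original os._Environ object

datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"])

# If .map() swapped os.environ for a plain dict, put the original object back.
if not isinstance(os.environ, os._Environ):
    os.environ = saved_environ
```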
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2115/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2115/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 18:44:33
|
https://api.github.com/repos/huggingface/datasets/issues/2108
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2108/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2108/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2108/events
|
https://github.com/huggingface/datasets/issues/2108
| 840,181,055
|
MDU6SXNzdWU4NDAxODEwNTU=
| 2,108
|
Is there a way to use a GPU only when training an Index in the process of add_faisis_index?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
[
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
open
| false
| null |
[] |
[] | 2021-03-24T21:32:16
| 2021-03-25T06:31:43
| null |
NONE
| null | null | null | null |
Motivation - Some FAISS indexes, like IVF, include a training step that clusters the dataset into a given number of cells. It would be nice if we could use a GPU for the training step and convert the index back to CPU, as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6).
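For context, a rough sketch of what I have in mind, adapted from the linked gist; the dimension, the index factory string, and the `train_vectors` / `my_dataset` names are placeholders:
```python
import faiss

d = 768  # embedding dimension (placeholder)
cpu_index = faiss.index_factory(d, "IVF4096,Flat")

# Move the index to GPU only for the costly training / clustering step.
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)
gpu_index.train(train_vectors)  # train_vectors: np.float32 array of shape (n, d)

# Convert back to CPU and hand it to the dataset as a custom index.
trained_cpu_index = faiss.index_gpu_to_cpu(gpu_index)
my_dataset.add_faiss_index(column="embeddings", custom_index=trained_cpu_index)
```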
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2108/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2108/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2106
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2106/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2106/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2106/events
|
https://github.com/huggingface/datasets/issues/2106
| 839,084,264
|
MDU6SXNzdWU4MzkwODQyNjQ=
| 2,106
|
WMT19 Dataset for Kazakh-English is not formatted correctly
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22580542?v=4",
"events_url": "https://api.github.com/users/trina731/events{/privacy}",
"followers_url": "https://api.github.com/users/trina731/followers",
"following_url": "https://api.github.com/users/trina731/following{/other_user}",
"gists_url": "https://api.github.com/users/trina731/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trina731",
"id": 22580542,
"login": "trina731",
"node_id": "MDQ6VXNlcjIyNTgwNTQy",
"organizations_url": "https://api.github.com/users/trina731/orgs",
"received_events_url": "https://api.github.com/users/trina731/received_events",
"repos_url": "https://api.github.com/users/trina731/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trina731/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trina731/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trina731",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
open
| false
| null |
[] |
[
"Hi ! Thanks for reporting\r\n\r\nBy looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue.\r\nMoreover these issues are not always the same:\r\n- L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line\r\n- L2897 is only `kk` text and must be appended at the end of the `kk` text of the **previous** line\r\n- L1247 and L1248 are only `kk` texts and must be inserted at the **beginning** of the `kk` text of the next line\r\n- (and there are many others)\r\n\r\nIt would be nice to have a corrected version of this file ! The file is available in the `wmt/news-commentary` repository on the Datasets Hub here:\r\nhttps://huggingface.co/datasets/wmt/news-commentary/tree/main/v14/training\r\n\r\nThen maybe we can notify the WMT authors and host the corrected version somewhere"
] | 2021-03-23T20:14:47
| 2021-03-25T21:36:20
| null |
NONE
| null | null | null | null |
In addition to the bug of languages being switched from issue #415, there are incorrect translations in the dataset because the English-Kazakh translations have an off-by-one formatting error.
The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here:
> Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc’s dramatic appreciation over the past few years. Швейцарияның Ұлттық банкі өз тарапынан, соңғы бірнеше жыл ішінде франк құнының қатты өсуінің дефляциялық әсерімен күресіп келеді.
>
> Line 95. Дефляциялық күштер 2008 жылы терең және ұзаққа созылған жаһандық дағдарысқа байланысты орын алған ірі экономикалық және қаржылық орын алмасулардың арқасында босатылды. Жеке қарыз қаражаты үлесінің қысқаруы орталық банктің рефляцияға жұмсалған күш-жігеріне тұрақты соққан қарсы желдей болды.
>
> Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
As you can see, line 95 has only the Kazakh translation, which should be part of line 96. This causes all of the following English-Kazakh translation pairs to be off by one, rendering ALL of those translations incorrect. This issue was not fixed when the dataset was imported to Hugging Face. By running this code
```
import datasets
from datasets import load_dataset
dataset = load_dataset('wmt19', 'kk-en')
for key in dataset['train']['translation']:
    if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']:
        print(key['en'])
        print(key['kk'])
        break
```
we get:
> 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
> The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate.
which shows that the issue still persists in the Huggingface dataset. The Kazakh sentence matches up to the next English sentence in the dataset instead of the current one.
Please let me know if you have any ideas to fix this off-by-one error in the dataset, or if this can be fixed by Hugging Face.
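If it helps with triage, a small sketch (the file name is a placeholder for the raw News Commentary TSV) that flags lines where one side of the pair is empty, which is where the shift seems to start:
```python
# Print line numbers in the raw TSV where one side is missing, i.e. candidate
# locations for the off-by-one shift described above.
with open("news-commentary-v14.en-kk.tsv", encoding="utf-8") as f:
    for i, line in enumerate(f, start=1):
        fields = line.rstrip("\n").split("\t")
        if len(fields) != 2 or not fields[0].strip() or not fields[1].strip():
            print(i, line[:80])
```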
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2106/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2106/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2105
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2105/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2105/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2105/events
|
https://github.com/huggingface/datasets/issues/2105
| 839,059,226
|
MDU6SXNzdWU4MzkwNTkyMjY=
| 2,105
|
Request to remove S2ORC dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13603748?v=4",
"events_url": "https://api.github.com/users/kyleclo/events{/privacy}",
"followers_url": "https://api.github.com/users/kyleclo/followers",
"following_url": "https://api.github.com/users/kyleclo/following{/other_user}",
"gists_url": "https://api.github.com/users/kyleclo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kyleclo",
"id": 13603748,
"login": "kyleclo",
"node_id": "MDQ6VXNlcjEzNjAzNzQ4",
"organizations_url": "https://api.github.com/users/kyleclo/orgs",
"received_events_url": "https://api.github.com/users/kyleclo/received_events",
"repos_url": "https://api.github.com/users/kyleclo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kyleclo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyleclo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kyleclo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) \r\n\r\nUntil you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?",
"Hi @kyleclo, as of today, you have not removed your bucket data yet, and therefore HuggingFace can download it from there.\r\n\r\nIs it OK? Are you planning to eventually delete it? Thank you.",
"Hi! Sorry I missed @yjernite 's previous message, thanks for responding! \r\n\r\nIs there an option where we can keep our data in our bucket, but the HF script no longer pulls data from it? "
] | 2021-03-23T19:43:06
| 2021-08-04T19:18:02
| null |
NONE
| null | null | null | null |
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2105/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2105/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2104
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2104/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2104/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2104/events
|
https://github.com/huggingface/datasets/issues/2104
| 839,027,834
|
MDU6SXNzdWU4MzkwMjc4MzQ=
| 2,104
|
Trouble loading wiki_movies
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35391599?v=4",
"events_url": "https://api.github.com/users/adityaarunsinghal/events{/privacy}",
"followers_url": "https://api.github.com/users/adityaarunsinghal/followers",
"following_url": "https://api.github.com/users/adityaarunsinghal/following{/other_user}",
"gists_url": "https://api.github.com/users/adityaarunsinghal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adityaarunsinghal",
"id": 35391599,
"login": "adityaarunsinghal",
"node_id": "MDQ6VXNlcjM1MzkxNTk5",
"organizations_url": "https://api.github.com/users/adityaarunsinghal/orgs",
"received_events_url": "https://api.github.com/users/adityaarunsinghal/received_events",
"repos_url": "https://api.github.com/users/adityaarunsinghal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adityaarunsinghal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adityaarunsinghal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adityaarunsinghal",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`.\r\n\r\nTo use `wiki_movies`, please update `datasets` with\r\n```\r\npip install --upgrade datasets\r\n```",
"Thanks a lot! That solved it and I was able to upload a model trained on it as well :)"
] | 2021-03-23T18:59:54
| 2022-03-30T08:22:58
| 2022-03-30T08:22:58
|
NONE
| null | null | null | null |
Hello,
I am trying `load_dataset("wiki_movies")` and it gives me this error:
`FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/wiki_movies/wiki_movies.py`
Trying to do
```
python run_mlm.py \
    --model_name_or_path roberta-base \
    --dataset_name wiki_movies
```
also gives the same error.
Is this something on my end? From what I can tell, this dataset was re-added by @lhoestq a few months ago.
Thank you!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2104/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2104/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 371 days, 13:23:04
|
https://api.github.com/repos/huggingface/datasets/issues/2103
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2103/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2103/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2103/events
|
https://github.com/huggingface/datasets/issues/2103
| 838,946,916
|
MDU6SXNzdWU4Mzg5NDY5MTY=
| 2,103
|
citation, homepage, and license fields of `dataset_info.json` are duplicated many times
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting :)\r\nMaybe we can concatenate fields only if they are different.\r\n\r\nCurrently this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/info.py#L180-L196\r\n\r\nThis can be a good first contribution to the library.\r\nPlease comment if you'd like to improve this and open a PR :)"
] | 2021-03-23T17:18:09
| 2021-04-06T14:39:59
| 2021-04-06T14:39:59
|
NONE
| null | null | null | null |
This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation.
Example result:
```
"citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n
```
@lhoestq and I believe this is happening due to the fields being concatenated `num_proc` times.
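A minimal sketch of the "concatenate only if different" idea from the comment above (the function name and separator are my own):
```python
def merge_info_field(values, sep="\n\n"):
    # Join citation/homepage/license strings, keeping only unique non-empty entries.
    unique = []
    for value in values:
        if value and value not in unique:
            unique.append(value)
    return sep.join(unique)

# e.g. merge_info_field([citation] * num_proc) would return a single copy of `citation`
```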
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2103/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2103/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 13 days, 21:21:50
|
https://api.github.com/repos/huggingface/datasets/issues/2099
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2099/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2099/events
|
https://github.com/huggingface/datasets/issues/2099
| 838,523,819
|
MDU6SXNzdWU4Mzg1MjM4MTk=
| 2,099
|
load_from_disk takes a long time to load local dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?",
"It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a list of integers. The `text` column is removed during tokenization.\r\n\r\n```\r\ndef add_len_and_seq(example):\r\n end_idx = example['input_ids'].index(SEP)\r\n example['actual_len'] = end_idx-1\r\n seq_len = len(example['input_ids'])\r\n \r\n\r\n example['seq'] = [PAD_ID] + [np.uint8(example['some_integer'])]*(end_idx-1) + [PAD_ID]*(seq_len-end_idx)\r\n \r\n return example\r\n```\r\n",
"Is `PAD_ID` a python integer ? You need all the integers in `example['seq']` to have the same type.\r\nDoes this work if you remove the `np.uint8` and use python integers instead ?",
"yup I casted it to `np.uint8` outside the function where it was defined. It was originally using python integers.",
"Strangely, even when I manually created `np.arrays` of specific `dtypes`, the types in the final `dataset_info.json` that gets written are still `int64`.\r\n\r\nUpdate: I tried creating lists of `int8`s and got the same result.",
"Yes this is a known issue: #625 \r\nWe're working on making the precision kept for numpy :)\r\nTo specify the precision of the integers, currently one needs to specify the output features with `.map(..., features=output_features)`",
"Do you know what step is taking forever in the code ?\r\nWhat happens if you interrupt the execution of the dataset loading ?",
"After a synchronous discussion, we found that the cache file sizes have an enormous effect on the loading speed: smaller cache files result in faster load times. `num_proc` controls the number of cache files that are being written and is inversely proportional to the individual file size. In other words, increase `num_proc` for smaller cache files :)\r\n\r\nMaybe this can be highlighted somewhere in the docs."
] | 2021-03-23T09:28:37
| 2021-03-23T17:12:16
| 2021-03-23T17:12:16
|
NONE
| null | null | null | null |
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though).
Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers?
Tagging @lhoestq since you seem to be working on these issues and PRs :)
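As a follow-up to the workaround mentioned in the comments (passing explicit output features to `map`), a minimal sketch; the column names below mirror the `add_len_and_seq` function quoted above, and `dataset` is assumed to be the tokenized dataset in question:
```python
from datasets import Features, Sequence, Value

# reuse the existing features and override the added columns with smaller
# integer types so they are stored as uint8/int32 instead of int64
features = Features({
    **dataset.features,
    "seq": Sequence(Value("uint8")),
    "actual_len": Value("int32"),
})
dataset = dataset.map(add_len_and_seq, features=features)
```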
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2099/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2099/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7:43:39
|
https://api.github.com/repos/huggingface/datasets/issues/2098
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2098/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2098/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2098/events
|
https://github.com/huggingface/datasets/issues/2098
| 838,447,959
|
MDU6SXNzdWU4Mzg0NDc5NTk=
| 2,098
|
SQuAD version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4",
"events_url": "https://api.github.com/users/h-peng17/events{/privacy}",
"followers_url": "https://api.github.com/users/h-peng17/followers",
"following_url": "https://api.github.com/users/h-peng17/following{/other_user}",
"gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/h-peng17",
"id": 39556019,
"login": "h-peng17",
"node_id": "MDQ6VXNlcjM5NTU2MDE5",
"organizations_url": "https://api.github.com/users/h-peng17/orgs",
"received_events_url": "https://api.github.com/users/h-peng17/received_events",
"repos_url": "https://api.github.com/users/h-peng17/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions",
"type": "User",
"url": "https://api.github.com/users/h-peng17",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! This is 1.1 as specified by the download urls here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/squad/squad.py#L50-L55",
"Got it. Thank you~"
] | 2021-03-23T07:47:54
| 2021-03-26T09:48:54
| 2021-03-26T09:48:54
|
NONE
| null | null | null | null |
Hi~
I want to train on the SQuAD dataset. Which version of SQuAD is it, 1.1 or 1.0? I'm new to QA and I couldn't find any description of it.
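For reference, a small sketch of one way to check which SQuAD release a loaded copy was built from, by inspecting the download URLs recorded in `DatasetInfo` (treat the exact attribute contents as an assumption):
```python
from datasets import load_dataset

squad = load_dataset("squad")
# the recorded source URLs should point at the v1.1 JSON files
print(list(squad["train"].info.download_checksums))
```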
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4",
"events_url": "https://api.github.com/users/h-peng17/events{/privacy}",
"followers_url": "https://api.github.com/users/h-peng17/followers",
"following_url": "https://api.github.com/users/h-peng17/following{/other_user}",
"gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/h-peng17",
"id": 39556019,
"login": "h-peng17",
"node_id": "MDQ6VXNlcjM5NTU2MDE5",
"organizations_url": "https://api.github.com/users/h-peng17/orgs",
"received_events_url": "https://api.github.com/users/h-peng17/received_events",
"repos_url": "https://api.github.com/users/h-peng17/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions",
"type": "User",
"url": "https://api.github.com/users/h-peng17",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2098/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2098/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 2:01:00
|
https://api.github.com/repos/huggingface/datasets/issues/2096
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2096/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2096/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2096/events
|
https://github.com/huggingface/datasets/issues/2096
| 838,038,379
|
MDU6SXNzdWU4MzgwMzgzNzk=
| 2,096
|
CoNLL 2003 dataset not including German
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8406802?v=4",
"events_url": "https://api.github.com/users/rxian/events{/privacy}",
"followers_url": "https://api.github.com/users/rxian/followers",
"following_url": "https://api.github.com/users/rxian/following{/other_user}",
"gists_url": "https://api.github.com/users/rxian/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rxian",
"id": 8406802,
"login": "rxian",
"node_id": "MDQ6VXNlcjg0MDY4MDI=",
"organizations_url": "https://api.github.com/users/rxian/orgs",
"received_events_url": "https://api.github.com/users/rxian/received_events",
"repos_url": "https://api.github.com/users/rxian/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rxian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rxian/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rxian",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] |
[
"Hello. I've been looking for information about German Conll2003 and found your question. Official site (https://www.clips.uantwerpen.be/conll2003/ner/) mentions that organizers provide only annotation. German texts (ECI Multilingual Text Corpus) are not freely available and can be ordered from the Linguistic Data Consortium.\r\n\r\nBut maybe something has changed since 2003.",
"You can find the reason for not including the German data here: https://github.com/huggingface/datasets/issues/4230."
] | 2021-03-22T19:23:56
| 2023-07-25T16:49:07
| 2023-07-25T16:49:07
|
NONE
| null | null | null | null |
Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!
I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since copies of it can be found in some places on the internet, such as GitHub? I could help add the German data to the hub, unless there are copyright issues that I am unaware of...
This is worth considering since many works use the union of the CoNLL 2002 and 2003 datasets to compare cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`, e.g. [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf).
## Adding a Dataset
- **Name:** CoNLL 2003 German
- **Paper:** https://www.aclweb.org/anthology/W03-0419/
- **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2096/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2096/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 854 days, 21:25:11
|
https://api.github.com/repos/huggingface/datasets/issues/2092
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2092/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2092/events
|
https://github.com/huggingface/datasets/issues/2092
| 836,984,043
|
MDU6SXNzdWU4MzY5ODQwNDM=
| 2,092
|
How to disable making arrow tables in load_dataset ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4",
"events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}",
"followers_url": "https://api.github.com/users/Jeevesh8/followers",
"following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}",
"gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jeevesh8",
"id": 48825663,
"login": "Jeevesh8",
"node_id": "MDQ6VXNlcjQ4ODI1NjYz",
"organizations_url": "https://api.github.com/users/Jeevesh8/orgs",
"received_events_url": "https://api.github.com/users/Jeevesh8/received_events",
"repos_url": "https://api.github.com/users/Jeevesh8/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jeevesh8",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do you think about this ?\r\n\r\nIf you have ideas or suggestions of what you expect from such features as a user, feel free to share them, this is really valuable to us !",
"People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think both the problem can be solved if we provide arrow tables themselves on datasets hub. Can we do this currently @lhoestq ? \r\n",
"@lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on huggingFace datasets hub are made available on GCS, automatically?",
"Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.",
"@lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?",
"We're still working on this :) This will be available soon\r\nUsers will be able to put their processed arrow files on the Hub",
"Hi! You can now use `Dataset.push_to_hub` to store preprocessed files on the Hub.\r\n\r\nAnd to avoid downloading preprocessed files, you can use streaming by setting `streaming=True` in `load_dataset`."
] | 2021-03-21T04:50:07
| 2022-06-01T16:49:52
| 2022-06-01T16:49:52
|
NONE
| null | null | null | null |
Is there a way to disable the construction of Arrow tables, or to build them on the fly as the dataset is being used?
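Following up on the streaming answer in the comments, a minimal sketch (the dataset name and config here are just placeholders, and streaming requires a reasonably recent `datasets` release):
```python
from datasets import load_dataset

# with streaming=True no Arrow table is materialized on disk;
# examples are yielded lazily as you iterate
ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example["text"][:80])
    if i >= 4:
        break
```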
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2092/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2092/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 437 days, 11:59:45
|
https://api.github.com/repos/huggingface/datasets/issues/2089
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2089/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2089/events
|
https://github.com/huggingface/datasets/issues/2089
| 836,788,019
|
MDU6SXNzdWU4MzY3ODgwMTk=
| 2,089
|
Add documentation for dataset README.md files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a tag that doesn't exist (for example for a custom license) you must make it start with `other-` and then a custom tag name.\r\n\r\nedit (@theo-m) if you ever find yourself resorting to adding an `other-*` tag, please do ping us somewhere so we can think about adding it to the \"official\" list :)",
"@lhoestq hmm - ok thanks for the answer.\r\nTo be honest I am not sure if this issue can be closed now.\r\nI just wanted to point out that this should either be documented or linked in the documentation.\r\nIf you feel like it is (will be) please just close this.",
"We're still working on the validation+documentation in this.\r\nFeel free to keep this issue open till we've added them",
"@lhoestq what is the status on this? Did you add documentation?",
"Hi ! There's the tagging app at https://huggingface.co/datasets/tagging/ that you can use.\r\nIt shows the list of all the tags you can use.\r\n\r\nIt is based on all the tag sets defined in this folder:\r\nhttps://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources",
"@lhoestq is there something like this form Models?",
"I don't think so. Feel free to take a look at the tags of other models (example [here](https://huggingface.co/bert-base-uncased/blob/main/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can discuss this",
"When modifying a README file, the Hub now displays a special UI with allowed values (see https://huggingface.co/docs/datasets/main/en/upload_dataset#create-a-dataset-card)."
] | 2021-03-20T11:44:38
| 2023-07-25T16:45:38
| 2023-07-25T16:45:37
|
CONTRIBUTOR
| null | null | null | null |
Hi,
the dataset README files have special headers.
Somehow, documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which values should licenses have? What do I say when it is a custom license? Should I add a link?
- how should I choose size_categories? What are valid ranges?
- what are valid task_categories?
Thanks
Philip
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2089/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 857 days, 5:00:59
|
https://api.github.com/repos/huggingface/datasets/issues/2084
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2084/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2084/events
|
https://github.com/huggingface/datasets/issues/2084
| 835,750,671
|
MDU6SXNzdWU4MzU3NTA2NzE=
| 2,084
|
CUAD - Contract Understanding Atticus Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] |
[
"+1 on this request"
] | 2021-03-19T09:27:43
| 2021-04-16T08:50:44
| 2021-04-16T08:50:44
|
CONTRIBUTOR
| null | null | null | null |
## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** https://github.com/TheAtticusProject/cuad/
- **Motivation:** good domain specific datasets are valuable
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2084/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 27 days, 23:23:01
|
https://api.github.com/repos/huggingface/datasets/issues/2083
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2083/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2083/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2083/events
|
https://github.com/huggingface/datasets/issues/2083
| 835,695,425
|
MDU6SXNzdWU4MzU2OTU0MjU=
| 2,083
|
`concatenate_datasets` throws error when changing the order of datasets to concatenate
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])\r\n\r\n``` \r\nThe order is important because the resulting dataset inherits the schema metadata of the first dataset passed to the `concatenate_datasets(...)` function (`pa.concat_tables` [docs](https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html)). I'll try to fix this ASAP."
] | 2021-03-19T08:29:48
| 2021-04-09T09:25:33
| 2021-04-09T09:25:33
|
CONTRIBUTOR
| null | null | null | null |
Hey,
I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets
and noticed that when the order in which the datasets are concatenated changes, an error is thrown where, IMO, it should not be.
Here is a google colab to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing
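In the meantime, a hedged debugging sketch, a hypothetical helper rather than a fix for the underlying schema-metadata issue diagnosed in the comment above: check that the declared features of the two datasets match and align them before concatenating. Note the failure can also come from Arrow schema *metadata*, which casting does not necessarily remove.
```python
from datasets import Dataset, concatenate_datasets

def concat_aligned(ds_a: Dataset, ds_b: Dataset) -> Dataset:
    """Hypothetical helper: align declared features, then concatenate."""
    if ds_a.features != ds_b.features:
        # requires a datasets version exposing Dataset.cast
        ds_b = ds_b.cast(ds_a.features)
    return concatenate_datasets([ds_a, ds_b])
```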
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2083/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2083/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 21 days, 0:55:45
|
https://api.github.com/repos/huggingface/datasets/issues/2080
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2080/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2080/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2080/events
|
https://github.com/huggingface/datasets/issues/2080
| 835,023,000
|
MDU6SXNzdWU4MzUwMjMwMDA=
| 2,080
|
Multidimensional arrays in a Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4",
"events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}",
"followers_url": "https://api.github.com/users/vermouthmjl/followers",
"following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}",
"gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vermouthmjl",
"id": 3142085,
"login": "vermouthmjl",
"node_id": "MDQ6VXNlcjMxNDIwODU=",
"organizations_url": "https://api.github.com/users/vermouthmjl/orgs",
"received_events_url": "https://api.github.com/users/vermouthmjl/received_events",
"repos_url": "https://api.github.com/users/vermouthmjl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vermouthmjl",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi !\r\n\r\nThis is actually supported ! but not yet in `from_pandas`.\r\nYou can use `from_dict` for now instead:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset)\r\n```\r\n\r\nThis will work but to use it with the torch formatter you must specify the `Array2D` feature type in order to tell the shape:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset, features=Features({\r\n \"bbox\": Array2D(shape=(3, 4), dtype=\"int64\"),\r\n \"input_ids\": Value(\"int64\")\r\n}))\r\ndataset.set_format(\"torch\")\r\nprint(dataset[0]['bbox'])\r\n# tensor([[1, 2, 3, 4],\r\n# [1, 2, 3, 4],\r\n# [1, 2, 3, 4]])\r\n```\r\nIf you don't specify the `Array2D` feature type, then the inferred type will be Sequence(Sequence(Value(\"int64\"))) and therefore the torch formatter will return list of tensors",
"Thanks for the explanation. \r\nWith my original DataFrame, I did\r\n```\r\ndataset = dataset.to_dict(\"list\")\r\n```\r\nand then the rest of the transformation from dictionary works just fine."
] | 2021-03-18T16:29:14
| 2021-03-25T12:46:53
| 2021-03-25T12:46:53
|
NONE
| null | null | null | null |
Hi,
I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row.
The following code results in conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`)
```
from datasets import Dataset
import pandas as pd
import numpy as np
dataset = pd.DataFrame({
'bbox': [
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])
],
'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)
```
Since I wanted to use PyTorch for the downstream training task, I also tried a few ways to directly put a column of 2-D PyTorch tensors in a formatted dataset, but I could only get a list of 1-D tensors, a list of arrays, or a list of lists.
```
import torch
from datasets import Dataset
import pandas as pd
dataset = pd.DataFrame({
'bbox': [
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]]
],
'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)
def test(examples):
return {'bbbox': torch.Tensor(examples['bbox'])}
dataset = dataset.map(test)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
dataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
def test2(examples):
return {'bbbox': torch.stack(examples['bbox'])}
dataset = dataset.map(test2)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
```
Is it possible to support n-D arrays/tensors in datasets?
It seems that it can also be useful for this [feature request](https://github.com/huggingface/datasets/issues/263).
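For completeness, a consolidated sketch of the workaround discussed in the comments: go through a plain dict and declare an `Array2D` feature so the torch formatter returns real 2-D tensors. The toy values simply mirror the example above.
```python
import numpy as np
import pandas as pd
from datasets import Array2D, Dataset, Features, Value

df = pd.DataFrame({
    "bbox": [np.array([[1, 2, 3, 4]] * 3) for _ in range(4)],
    "input_ids": [1, 2, 3, 4],
})
features = Features({"bbox": Array2D(shape=(3, 4), dtype="int64"), "input_ids": Value("int64")})
ds = Dataset.from_dict(df.to_dict("list"), features=features)
ds.set_format("torch")
print(ds[0]["bbox"])  # a 3x4 torch tensor
```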
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4",
"events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}",
"followers_url": "https://api.github.com/users/vermouthmjl/followers",
"following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}",
"gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vermouthmjl",
"id": 3142085,
"login": "vermouthmjl",
"node_id": "MDQ6VXNlcjMxNDIwODU=",
"organizations_url": "https://api.github.com/users/vermouthmjl/orgs",
"received_events_url": "https://api.github.com/users/vermouthmjl/received_events",
"repos_url": "https://api.github.com/users/vermouthmjl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vermouthmjl",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2080/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2080/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 20:17:39
|
https://api.github.com/repos/huggingface/datasets/issues/2078
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2078/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2078/events
|
https://github.com/huggingface/datasets/issues/2078
| 834,694,819
|
MDU6SXNzdWU4MzQ2OTQ4MTk=
| 2,078
|
MemoryError when computing WER metric
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4",
"events_url": "https://api.github.com/users/diego-fustes/events{/privacy}",
"followers_url": "https://api.github.com/users/diego-fustes/followers",
"following_url": "https://api.github.com/users/diego-fustes/following{/other_user}",
"gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/diego-fustes",
"id": 5707233,
"login": "diego-fustes",
"node_id": "MDQ6VXNlcjU3MDcyMzM=",
"organizations_url": "https://api.github.com/users/diego-fustes/orgs",
"received_events_url": "https://api.github.com/users/diego-fustes/received_events",
"repos_url": "https://api.github.com/users/diego-fustes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/diego-fustes",
"user_view_type": "public"
}
|
[
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] |
[
"Hi ! Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compute the WER is defined here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/metrics/wer/wer.py#L93-L94",
"Hi,\r\n\r\nI've just pushed a pull request that is related to this issue https://github.com/huggingface/datasets/pull/2169. It's not iterative, but it should avoid memory errors. It's based on the editdistance python library. An iterative implementation should be as easy as storing scores and words stepwise and dividing at the end. ",
"I see, this was solved by other thread. Ok, let me know if you want to switch the implementation for any reason :)",
"Thanks for diving into this anyway ^^'\r\nAs you said this actually got solved a few days ago",
"Someone created an issue https://github.com/jitsi/jiwer/issues/40 at jiwer which shows that this is still a problem in the current version. Would be curious to figure out how this can be fixed by jiwer... :) I assume that it runs of out memory because it's trying to compute the WER over (too many) test samples?",
"Hi !\r\n\r\nIt's computed iteratively so not sure what could go wrong\r\n\r\nhttps://github.com/huggingface/datasets/blob/8afd0ba8c27800a55ea69d9fcd702dc97d9c16d8/metrics/wer/wer.py#L100-L106\r\n\r\n@NiklasHoltmeyer what version of `datasets` are you running ?\r\n",
"One possible explanation might be that it is the user who is passing all the sentences in a single element to `wer.compute`?\r\n\r\nAs current implementation iterates over the elements of `predictions` and `references`, this can be problematic if `predictions` and `references` contain a single huge element each. \r\n\r\nThis could be the case, for example, with a single string with all sentences:\r\n```python\r\nresult[\"predicted\"] = \"One sentence. Other sentence.\"\r\n```\r\nor with a __double__ nested list of sentence lists\r\n```python\r\nresult[\"predicted\"] = [[ [\"One sentence.\"], [\"Other sentence\"] ]]\r\n```\r\n\r\nThe user should check the dimensions of the data structure passed to `predictions` and `references`.",
"Hi all,\r\n\r\nin my case I was using and older version of datasets and, as @albertvillanova points out, passing the full list of sentences for the metric calculation. The problem was in the way jiwer implements WER, as it tries to compute WER for the full list at once instead of doing it element-wise. I think that with the latest implementation of datasets, or by using the alternative WER function that I've contributed on this [pull request](https://github.com/huggingface/datasets/pull/2169) there shouldn't be memory errors.",
"@lhoestq i was using Datasets==1.5.0 with 1.6.1 it worked (atleast the first run) but 1.5.0 is not compatible with my preprocessing. i cant save my dataset to a parquet file while using the latest datasets version\r\n\r\n-> \r\n```\r\n File \"../preprocess_dataset.py\", line 132, in <module>\r\n pq.write_table(train_dataset.data, f'{resampled_data_dir}/{data_args.dataset_config_name}.train.parquet')\r\n File \"/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py\", line 1674, in write_table\r\n writer.write_table(table, row_group_size=row_group_size)\r\n File \"/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py\", line 588, in write_table\r\n self.writer.write_table(table, row_group_size=row_group_size)\r\nTypeError: Argument 'table' has incorrect type (expected pyarrow.lib.Table, got ConcatenationTable)\r\n``` \r\n\r\nif i do \r\n```\r\nimport pyarrow.parquet as pq\r\n...\r\n...\r\npq.write_table(train_dataset.data, 'train.parquet')\r\npq.write_table(eval_dataset.data, 'eval.parquet')\r\n```\r\n\r\nwhile using 1.6.1. and its working with 1.5.0\r\n",
"Hi ! You can pass dataset.data.table instead of dataset.data to pq.write_table",
"This seems to be working so far! Thanks!"
] | 2021-03-18T11:30:05
| 2021-05-01T08:31:49
| 2021-04-06T07:20:43
|
NONE
| null | null | null | null |
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation:
```
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
However, I receive the following exception:
```
Traceback (most recent call last):
File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module>
print(wer.compute(predictions=result["predicted"], references=result["target"]))
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute
return wer(references, predictions)
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer
truth, hypothesis, truth_transform, hypothesis_transform, **kwargs
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures
H, S, D, I = _get_operation_counts(truth, hypothesis)
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts
editops = Levenshtein.editops(source_string, destination_string)
MemoryError
```
My system has more than 10 GB of available RAM. Looking at the code, I think it could be related to the way jiwer does the calculation, as it pastes all the sentences into a single string before calling the Levenshtein editops function.
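For reference, a memory-friendlier sketch that computes WER sentence by sentence instead of joining everything into one huge string, in the spirit of the iterative implementation mentioned in the comments. It assumes `jiwer.compute_measures` exposes `hits`/`substitutions`/`deletions`/`insertions` counts, and that `result["predicted"]` / `result["target"]` are the lists of sentences from the snippet above:
```python
import jiwer

incorrect, total = 0, 0
for prediction, reference in zip(result["predicted"], result["target"]):
    measures = jiwer.compute_measures(reference, prediction)
    incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
    total += measures["substitutions"] + measures["deletions"] + measures["hits"]
print(incorrect / total)
```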
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2078/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2078/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 18 days, 19:50:38
|
https://api.github.com/repos/huggingface/datasets/issues/2076
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2076/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2076/events
|
https://github.com/huggingface/datasets/issues/2076
| 834,445,296
|
MDU6SXNzdWU4MzQ0NDUyOTY=
| 2,076
|
Issue: Dataset download error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4",
"events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}",
"followers_url": "https://api.github.com/users/XuhuiZhou/followers",
"following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/XuhuiZhou",
"id": 20436061,
"login": "XuhuiZhou",
"node_id": "MDQ6VXNlcjIwNDM2MDYx",
"organizations_url": "https://api.github.com/users/XuhuiZhou/orgs",
"received_events_url": "https://api.github.com/users/XuhuiZhou/received_events",
"repos_url": "https://api.github.com/users/XuhuiZhou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/XuhuiZhou",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
open
| false
| null |
[] |
[
"Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.",
"It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and then update the dataset_infos.json file with\r\n```\r\ndatasets-cli test ./datasets/iwslt2017 --all_configs --save_infos --ignore_verifications\r\n```",
"Is this a command to update my local files or fix the file Github repo in general? (I am not so familiar with the datasets-cli command here)\r\n\r\nI also took a brief look at the **Sharing your dataset** section, looks like I could fix that locally and push it to the repo? I guess we are \"canonical\" category?",
"This command will update your local file. Then you can open a Pull Request to push your fix to the github repo :)\r\nAnd yes you are right, it is a \"canonical\" dataset, i.e. a dataset script defined in this github repo (as opposed to dataset repositories of users on the huggingface hub)",
"Hi, thanks for the answer. \r\n\r\nI gave a try to the problem today. But I encountered an upload error: \r\n\r\n```\r\ngit push -u origin fix_link_iwslt\r\nEnter passphrase for key '/home2/xuhuizh/.ssh/id_rsa': \r\nERROR: Permission to huggingface/datasets.git denied to XuhuiZhou.\r\nfatal: Could not read from remote repository.\r\n\r\nPlease make sure you have the correct access rights\r\nand the repository exists.\r\n```\r\n\r\nAny insight here? \r\n\r\nBy the way, when I run the datasets-cli command, it shows the following error, but does not seem to be the error coming from `iwslt.py`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home2/xuhuizh/anaconda3/envs/UMT/bin/datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/commands/datasets_cli.py\", line 35, in main\r\n service.run()\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/commands/test.py\", line 141, in run\r\n try_from_hf_gcs=False,\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/builder.py\", line 579, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/builder.py\", line 639, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/utils/info_utils.py\", line 32, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz'}\r\n```",
"Hi ! To create a PR on this repo your must fork it and create a branch on your fork. See how to fork the repo [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment).\r\nAnd to make the command work without the `ExpectedMoreDownloadedFiles` error, you just need to use the `--ignore_verifications` flag.",
"Hi @XuhuiZhou,\r\n\r\nAs @lhoestq has well explained, you need to fork HF's repository, create a feature branch in your fork, push your changes to it and then open a Pull Request to HF's upstream repository. This is so because at HuggingFace Datasets we follow a development model called \"Fork and Pull Model\". You can find more information here:\r\n- [Understanding the GitHub flow](https://guides.github.com/introduction/flow/)\r\n- [Forking Projects](https://guides.github.com/activities/forking/)\r\n\r\nAlternatively, if you find all these steps too complicated, you can use the GitHub official command line tool: [GitHub CLI](https://cli.github.com/). Once installed, in order to create a Pull Request, you only need to use this command:\r\n```shell\r\ngh pr create --web\r\n```\r\nThis utility will automatically create the fork, push your changes and open a Pull Request, under the hood."
] | 2021-03-18T06:36:06
| 2021-03-22T11:52:31
| null |
NONE
| null | null | null | null |
The download link in the `iwslt2017.py` script does not seem to work anymore.
For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`
It would be nice if we could modify the script to use the new download link.
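For reference, a sketch of how one might test a locally patched copy of the loading script before opening a pull request (the local path and config name below are illustrative assumptions, not confirmed values):

```python
# Hedged sketch: load from a local, edited copy of the IWSLT 2017 loading script.
# "./datasets/iwslt2017" and the config name are placeholders for illustration.
from datasets import load_dataset

ds = load_dataset(
    "./datasets/iwslt2017",  # local directory containing the patched iwslt2017.py
    "iwslt2017-zh-en",       # illustrative config name
    split="train",
)
print(ds[0])
```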
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2076/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2076/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/2075
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2075/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2075/events
|
https://github.com/huggingface/datasets/issues/2075
| 834,301,246
|
MDU6SXNzdWU4MzQzMDEyNDY=
| 2,075
|
ConnectionError: Couldn't reach common_voice.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4",
"events_url": "https://api.github.com/users/LifaSun/events{/privacy}",
"followers_url": "https://api.github.com/users/LifaSun/followers",
"following_url": "https://api.github.com/users/LifaSun/following{/other_user}",
"gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LifaSun",
"id": 6188893,
"login": "LifaSun",
"node_id": "MDQ6VXNlcjYxODg4OTM=",
"organizations_url": "https://api.github.com/users/LifaSun/orgs",
"received_events_url": "https://api.github.com/users/LifaSun/received_events",
"repos_url": "https://api.github.com/users/LifaSun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LifaSun",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?",
"@albertvillanova Thanks! It works well now. "
] | 2021-03-18T01:19:06
| 2021-03-20T10:29:41
| 2021-03-20T10:29:41
|
NONE
| null | null | null | null |
When I run:
from datasets import load_dataset, load_metric
common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
Got:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py
Version:
1.4.1
Thanks! @lhoestq @LysandreJik @thomwolf
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4",
"events_url": "https://api.github.com/users/LifaSun/events{/privacy}",
"followers_url": "https://api.github.com/users/LifaSun/followers",
"following_url": "https://api.github.com/users/LifaSun/following{/other_user}",
"gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LifaSun",
"id": 6188893,
"login": "LifaSun",
"node_id": "MDQ6VXNlcjYxODg4OTM=",
"organizations_url": "https://api.github.com/users/LifaSun/orgs",
"received_events_url": "https://api.github.com/users/LifaSun/received_events",
"repos_url": "https://api.github.com/users/LifaSun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LifaSun",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2075/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 9:10:35
|
https://api.github.com/repos/huggingface/datasets/issues/2071
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2071/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2071/events
|
https://github.com/huggingface/datasets/issues/2071
| 833,950,824
|
MDU6SXNzdWU4MzM5NTA4MjQ=
| 2,071
|
Multiprocessing is slower than single process
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"dupe of #1992"
] | 2021-03-17T16:08:58
| 2021-03-18T09:10:23
| 2021-03-18T09:10:23
|
CONTRIBUTOR
| null | null | null | null |
```python
# benchmark_filter.py
import logging
import sys
import time
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
bc = load_dataset("bookcorpus")
now = time.time()
try:
bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
except Exception as e:
print(f"cancelled: {e}")
elapsed = time.time() - now
print(elapsed)
```
Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+)
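For what it's worth, a smaller-scale timing sketch (the slice size and `num_proc` values are arbitrary assumptions) that compares a few `num_proc` settings on a `.select()` subset before committing to a full run:

```python
# Hedged benchmarking sketch: time the same filter over a 100k-row slice
# for several num_proc values, bypassing the cache so each run does real work.
import time
from datasets import load_dataset

bc = load_dataset("bookcorpus", split="train").select(range(100_000))

for num_proc in (1, 2, 4):
    start = time.time()
    bc.filter(lambda x: len(x["text"]) < 64, num_proc=num_proc, load_from_cache_file=False)
    print(f"num_proc={num_proc}: {time.time() - start:.1f}s")
```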
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2071/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2071/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17:01:25
|
https://api.github.com/repos/huggingface/datasets/issues/2070
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2070/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2070/events
|
https://github.com/huggingface/datasets/issues/2070
| 833,799,035
|
MDU6SXNzdWU4MzM3OTkwMzU=
| 2,070
|
ArrowInvalid issue for squad v2 dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4",
"events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}",
"followers_url": "https://api.github.com/users/MichaelYxWang/followers",
"following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MichaelYxWang",
"id": 29818977,
"login": "MichaelYxWang",
"node_id": "MDQ6VXNlcjI5ODE4OTc3",
"organizations_url": "https://api.github.com/users/MichaelYxWang/orgs",
"received_events_url": "https://api.github.com/users/MichaelYxWang/received_events",
"repos_url": "https://api.github.com/users/MichaelYxWang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MichaelYxWang",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a batch.\r\n\r\nHowever it seems like `tokenized_examples` doesn't have the same number of elements in each field. One field seems to have `1180` elements while `candidate_attention_mask` only has `1178`."
] | 2021-03-17T13:51:49
| 2021-08-04T17:57:16
| 2021-08-04T17:57:16
|
NONE
| null | null | null | null |
Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb).
In the `prepare_validation_features` function, I made some modifications to tokenize a new set of questions with the original contexts and save them in three different lists called `candidate_input_ids`, `candidate_attention_mask` and `candidate_token_type_ids`. When I try to run the next cell for `dataset.map`, I get the following error:
`ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178`
My code is as follows:
```
def generate_candidate_questions(examples):
    val_questions = examples["question"]
    candidate_questions = random.sample(datasets["train"]["question"], len(val_questions))
    candidate_questions = [x[:max_length] for x in candidate_questions]
    return candidate_questions


def prepare_validation_features(examples, use_mixing=False):
    pad_on_right = tokenizer.padding_side == "right"
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    if use_mixing:
        candidate_questions = generate_candidate_questions(examples)
        tokenized_candidates = tokenizer(
            candidate_questions if pad_on_right else examples["context"],
            examples["context"] if pad_on_right else candidate_questions,
            truncation="only_second" if pad_on_right else "only_first",
            max_length=max_length,
            stride=doc_stride,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
            padding="max_length",
        )

    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    tokenized_examples["example_id"] = []

    if use_mixing:
        # attach the candidate encodings alongside the original ones
        tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"]
        tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"]
        tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"]

    for i in range(len(tokenized_examples["input_ids"])):
        sequence_ids = tokenized_examples.sequence_ids(i)
        context_index = 1 if pad_on_right else 0
        sample_index = sample_mapping[i]
        tokenized_examples["example_id"].append(examples["id"][sample_index])
        tokenized_examples["offset_mapping"][i] = [
            (o if sequence_ids[k] == context_index else None)
            for k, o in enumerate(tokenized_examples["offset_mapping"][i])
        ]

    return tokenized_examples


validation_features = datasets["validation"].map(
    lambda xs: prepare_validation_features(xs, True),
    batched=True,
    remove_columns=datasets["validation"].column_names,
)
```
I guess this might happen because of `batched=True`. I see similar issues in this repo related to Arrow table length mismatch errors, but in those cases the numbers vary a lot; in my case, this error always happens when the expected and actual lengths are very close. Thanks for the help!
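For reference, a minimal toy sketch (illustrative data, unrelated to the notebook) of the constraint behind this error: with `batched=True`, every column returned by the mapped function must contain the same number of rows.

```python
# Toy sketch of the batched-map length constraint.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc", "dddd"]})

def mismatched(batch):
    # 4 rows of "text" but only 3 of "length" -> Arrow raises a length-mismatch error
    return {"text": batch["text"], "length": [len(t) for t in batch["text"]][:-1]}

def consistent(batch):
    # every returned column has the same number of rows
    return {"text": batch["text"], "length": [len(t) for t in batch["text"]]}

ds.map(consistent, batched=True)    # works
# ds.map(mismatched, batched=True)  # fails like the ArrowInvalid above
```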
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2070/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2070/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 140 days, 4:05:27
|
https://api.github.com/repos/huggingface/datasets/issues/2068
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2068/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2068/events
|
https://github.com/huggingface/datasets/issues/2068
| 833,602,832
|
MDU6SXNzdWU4MzM2MDI4MzI=
| 2,068
|
PyTorch not available error on SageMaker GPU docker though it is installed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4",
"events_url": "https://api.github.com/users/sivakhno/events{/privacy}",
"followers_url": "https://api.github.com/users/sivakhno/followers",
"following_url": "https://api.github.com/users/sivakhno/following{/other_user}",
"gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sivakhno",
"id": 1651457,
"login": "sivakhno",
"node_id": "MDQ6VXNlcjE2NTE0NTc=",
"organizations_url": "https://api.github.com/users/sivakhno/orgs",
"received_events_url": "https://api.github.com/users/sivakhno/received_events",
"repos_url": "https://api.github.com/users/sivakhno/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sivakhno",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"cc @philschmid ",
"Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`",
"Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py3 `), but the error is the same. ",
"Could paste the code you use the start your training job and the fine-tuning script you run? ",
"@sivakhno this should be now fixed in `datasets>=1.5.0`. ",
"@philschmid Recently released tensorflow-macos seems to be missing. ",
"I've created a PR to add this. "
] | 2021-03-17T10:04:27
| 2021-06-14T04:47:30
| 2021-06-14T04:47:30
|
NONE
| null | null | null | null |
I get an error when running data loading using the SageMaker SDK:
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*args, **kwargs)
File "/opt/ml/code/data_module.py", line 103, in setup
self.dataset[split].set_format(type="torch", columns=self.columns)
File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format
_ = get_formatter(type, **format_kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter
raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]
ValueError: PyTorch needs to be installed to be able to return PyTorch tensors.
```
when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically lines
```
self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]
self.dataset[split].set_format(type="torch", columns=self.columns)
```
The SageMaker Docker image used is `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3`.
By running the container interactively, I have checked that torch loading completes successfully by executing the check at `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`.
Also, as the first lines in the data loading module I have
```
import os
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"
```
But unfortunately the error still persists. Any suggestions would be appreciated as I am stuck.
Many Thanks!
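P.S. For completeness, this is the sanity check I run inside the container (a sketch; `datasets.config.TORCH_AVAILABLE` is assumed to be the detection flag, and the env vars only take effect if set before `datasets` is imported):

```python
# Hedged sanity check: confirm torch imports and that datasets detects it.
import os

os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"

import torch
import datasets.config

print("torch version:", torch.__version__)
print("torch visible to datasets:", datasets.config.TORCH_AVAILABLE)  # assumed flag name
```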
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2068/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 88 days, 18:43:03
|
https://api.github.com/repos/huggingface/datasets/issues/2067
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2067/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2067/events
|
https://github.com/huggingface/datasets/issues/2067
| 833,559,940
|
MDU6SXNzdWU4MzM1NTk5NDA=
| 2,067
|
Multiprocessing windows error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flozi00",
"id": 47894090,
"login": "flozi00",
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"repos_url": "https://api.github.com/users/flozi00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flozi00",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..",
"```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\n\r\n\r\nupdated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n\r\n```",
"\r\n\r\n\r\n\r\n\r\nI was able to copy some of the shell \r\nThis is repeating every half second\r\nWin 10, Anaconda with python 3.8, datasets installed from main branche\r\n```\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n exitcode = _main(fd, parent_sentinel)\r\n raise RuntimeError('''\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n\r\n The \"freeze_support()\" line can be omitted if the program\r\n is not going to be frozen to produce an executable. return _run_module_code(code, init_globals, run_name,\r\n prepare(preparation_data)\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n updated_dataset = dataset.map(lambda example: 
{'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n self._popen = self._Popen(self)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 327, in _Popen\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n return Popen(process_obj)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\popen_spawn_win32.py\", line 45, in __init__\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n prep_data = spawn.get_preparation_data(process_obj._name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 154, in get_preparation_data\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n raise RuntimeError('''\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n```",
"Thanks this is really helpful !\r\nI'll try to reproduce on my side and come back to you",
"if __name__ == '__main__':\r\n\r\n\r\nThis line before calling the map function stops the error but the script still repeats endless",
"Indeed you needed `if __name__ == '__main__'` since accoding to [this stackoverflow post](https://stackoverflow.com/a/18205006):\r\n\r\n> On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.\r\n\r\nRegarding the hanging issue, can you try to update `dill` and `multiprocess` ?",
"It's already on the newest version",
"```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 791, in move\r\n os.rename(src, real_dst)\r\nFileExistsError: [WinError 183] Eine Datei kann nicht erstellt werden, wenn sie bereits vorhanden ist: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\tmpx9fl_jg8' -> 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\n prepare(preparation_data)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\cvtrain.py\", line 243, in <module>\r\n common_voice_train = common_voice_train.map(remove_special_characters, remove_columns=[\"sentence\"])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1339, in map\r\n return self._map_single(\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 203, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\fingerprint.py\", line 337, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1646, in _map_single\r\n shutil.move(tmp_file.name, cache_file_name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 805, in move\r\n copy_function(src, real_dst)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 435, in copy2\r\n copyfile(src, dst, follow_symlinks=follow_symlinks)\r\n 0%| | 0/27771 [00:00<?, ?ex/s] \r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 264, in copyfile\r\n with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:\r\nOSError: [Errno 22] Invalid argument: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n```\r\n\r\nI was adding freeze support before calling the mapping function like this\r\nif __name__ == '__main__':\r\n freeze_support()\r\n dataset.map(....)",
"Usually OSError of an arrow file on windows means that the file is currently opened as a dataset object, so you can't overwrite it until the dataset object falls out of scope.\r\nCan you make sure that there's no dataset object that loaded the `cache-9b4f203a63742dfc.arrow` file ?",
"Now I understand\r\nThe error occures because the script got restarted in another thread, so the object is already loaded.\r\nStill don't have an idea why a new thread starts the whole script again"
] | 2021-03-17T09:12:28
| 2021-08-04T17:59:08
| 2021-08-04T17:59:08
|
CONTRIBUTOR
| null | null | null | null |
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the `num_proc` argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the `map_to_array` part.
An error occurs because the cache file already exists and Windows throws an error. After this, the log gets stuck in a loop.
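For reference, a minimal sketch of the Windows-safe pattern (a toy `glue/mrpc` example, not the wav2vec2 script): any call that spawns workers with `num_proc > 1` has to run under the `__main__` guard.

```python
# Minimal Windows-safe sketch: keep process-spawning calls under the __main__ guard.
from datasets import load_dataset

def add_prefix(example):
    return {"sentence1": "My sentence: " + example["sentence1"]}

if __name__ == "__main__":
    dataset = load_dataset("glue", "mrpc", split="train")
    updated = dataset.map(add_prefix, num_proc=4)
    print(updated[0]["sentence1"])
```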
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2067/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 140 days, 8:46:40
|
https://api.github.com/repos/huggingface/datasets/issues/2065
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2065/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2065/events
|
https://github.com/huggingface/datasets/issues/2065
| 833,291,432
|
MDU6SXNzdWU4MzMyOTE0MzI=
| 2,065
|
Only user permission of saved cache files, not group
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4",
"events_url": "https://api.github.com/users/lorr1/events{/privacy}",
"followers_url": "https://api.github.com/users/lorr1/followers",
"following_url": "https://api.github.com/users/lorr1/following{/other_user}",
"gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lorr1",
"id": 57237365,
"login": "lorr1",
"node_id": "MDQ6VXNlcjU3MjM3MzY1",
"organizations_url": "https://api.github.com/users/lorr1/orgs",
"received_events_url": "https://api.github.com/users/lorr1/received_events",
"repos_url": "https://api.github.com/users/lorr1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorr1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lorr1",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646))\r\n\r\nThat means it keeps the permissions specified by the `tempfile.NamedTemporaryFile` object, i.e. `-rw-------` instead of `-rw-r--r--`. Improving this could be a nice first contribution to the library :)",
"Hi @lhoestq,\r\nI looked into this and yes you're right. The `NamedTemporaryFile` is always created with mode 0600, which prevents group from reading the file. Should we change the permissions of `tmp_file.name` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1871) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1590), post creation to 0644 inorder for group and others to read it?",
"Good idea :) we could even update the permissions after the file has been moved by shutil.move [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1899) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646) actually.\r\nApparently they set the default 0600 for temporary files for security reasons, so let's update the umask only after the file has been moved",
"Would it be possible to actually set the umask based on a user provided argument? For example, a popular usecase my team has is using a shared file-system for processing datasets. This may involve writing/deleting other files, or changing filenames, which a -rw-r--r-- wouldn't fix. ",
"Note that you can get the cache files of a dataset with the `cache_files` attributes.\r\nThen you can `chmod` those files and all the other cache files in the same directory.\r\n\r\nMoreover we can probably keep the same permissions after each transform. This way you just need to set the permissions once after doing `load_dataset` for example, and then all the new transformed cached files will have the same permissions.\r\nWhat do you think ?",
"This means we'll check the permissions of other `cache_files` already created for a dataset before setting permissions for new `cache_files`?",
"You can just check the permission of `dataset.cache_files[0]` imo",
"> This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions.\r\n\r\nI was referring to this. Ensuring that newly generated `cache_files` have the same permissions",
"Yes exactly\r\n\r\nI imagine users can first do `load_dataset`, then chmod on the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?",
"Sounds nice but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting the user set new permissions by itself first and then making sure newly generated files have same permissions why don't we ask the user initially only what they want? What are your thoughts?",
"Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? Or use the value of `os.umask` ?",
"Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)",
"I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users.\r\n\r\nFor example, I might start a training run, downloading a dataset. Then, a couple of days later, a collaborator using the same repository might want to use the same dataset on the same shared filesystem, but won't be able to under the default permissions.\r\n\r\nBeing able to specify directly in the top-level `load_dataset()` call seems important, but an equally valid option would be to just inherit from the running user's `umask` (this should probably be the default anyway).\r\n\r\nSo basically, argument that takes a custom set of permissions, and by default, use the running user's umask!",
"Maybe let's start by defaulting to the user's umask !\r\nDo you want to give it a try @bhavitvyamalik ?",
"Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask\r\n\r\n",
"You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.",
"FWIW, we have this issue with other caches - e.g. `transformers` model files. So probably will need to backport this into `transformers` as well.\r\n\r\nthanks @thomwolf for the pointer.",
"Hi @stas00,\r\nFor this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?",
"That sounds very right to me, @bhavitvyamalik ",
"The cluster I am working on does not allow me to change the permission of the files with os.chmod. I was wondering if there is any workaround for this? My cache is in a GCP bucket and I can't change file permissions once I mount it.",
"@vmurahari3 what error do you have exactly ?",
"I get a permission denied error on https://github.com/huggingface/datasets/blob/b8363e0539c6f0cb5de49af32962cf2eb4c47395/src/datasets/arrow_dataset.py#L2799. I suspect I don't have permissions to change group permissions. I am mounting a GCP bucket through [gcsfuse](https://github.com/GoogleCloudPlatform/gcsfuse). ",
"What @lhoestq is asking for is the full multi-line traceback - it's almost never enough to show the last line - a full stack is needed to get the context. Thank you!\r\n\r\nI wonder if a workaround is to try/except and then issue a warning if this fails?",
"Hello, I'm working on a project with a very similar setup to the one mentioned by @siddk, namely we have a shared cache directory in the team that we wish to use to avoid redundant dataset downloads. However, we're hitting `PermissionError` when one member of the team tries to reload a dataset that was downloaded by another team member (stack trace below).\r\n\r\nConcretely, we first create a shared directory and give everyone read, write, execute permissions as follows (I know this isn't best practice 🤫):\r\n\r\n```shell\r\nmkdir shared-folder\r\nchmod -R 777 shared-folder\r\n```\r\n\r\nWe then set the following in our `.bashrc` profiles:\r\n\r\n```\r\n# Hugging Face caches\r\nexport HUGGINGFACE_HUB_CACHE=/path/to/shared-folder\r\nexport HF_DATASETS_CACHE=/path/to/shared-folder\r\n\r\n# For shared access to the shared-folder directory\r\numask 000\r\n```\r\n\r\nNow, running e.g. `load_dataset(\"emotion\")` the first time works (as expected), but when another team member tries to load from cache we get something like:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 1218, in dataset_module_factory\r\n raise e1 from None\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 1193, in dataset_module_factory\r\n ).get_module()\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 903, in get_module\r\n local_path = self.download_loading_script()\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 871, in download_loading_script\r\n return cached_path(file_path, download_config=download_config)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/utils/file_utils.py\", line 210, in cached_path\r\n output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/utils/extract.py\", line 42, in extract\r\n extractor_format = self.extractor.infer_extractor_format(input_path)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/utils/extract.py\", line 287, in infer_extractor_format\r\n if extractor.is_extractable(path, magic_number=magic_number):\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/utils/extract.py\", line 84, in is_extractable\r\n return tarfile.is_tarfile(path)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/tarfile.py\", line 2517, in is_tarfile\r\n t = open(name)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/tarfile.py\", line 1632, in open\r\n return func(name, \"r\", fileobj, **kwargs)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/tarfile.py\", line 1698, in gzopen\r\n fileobj = GzipFile(name, mode + \"b\", compresslevel, fileobj)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/gzip.py\", line 174, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nPermissionError: [Errno 13] Permission denied: 
'/path/to/shared-folder/downloads/4e7db366b1ea045d0faa083a2e47ac87326ad8e653f894763b0982c2a1e94078.cc96367835404d4195bf75b2602e6dbbfd2da9288170fc0b2298fc0e376ff52a.py'\r\n```\r\n\r\nIf I understand this comment from @bhavitvyamalik:\r\n\r\n> Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask\r\n\r\nthe goal was to infer the `umask` of the user profile, but perhaps I misunderstood and the true solution is to chmod the `cache_files` as @lhoestq suggests above - is that correct?\r\n\r\ncc @natolambert @nazneenrajani @edbeeching ",
"Python files are stored in the modules caches, that you can modify by setting `HF_MODULES_CACHE` as well and set appropriate permissions\r\n\r\nThey are in different locations because the `HF_MODULES_CACHE` is added to the python path to be able to import the dataset scripts :)",
"Thanks a lot for the tip @lhoestq ! I didn't know about this extra cache - thanks :)"
] | 2021-03-17T00:20:22
| 2023-03-31T12:17:06
| 2021-05-10T06:45:29
|
NONE
| null | null | null | null |
Hello,
It seems that when a cached file is saved from calling `dataset.map` for preprocessing, it gets only the user's permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue, as we have to continually reset the permissions of the files. Do you know of any way around this, or a way to correctly set the permissions?
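For context, the workaround we use today is roughly the following (a sketch, not built-in behavior; it assumes `cache_files` entries expose a `"filename"` key): chmod the cached arrow files after each processing step.

```python
# Workaround sketch: make cached arrow files group-readable after processing.
import os
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
ds = ds.map(lambda example: example)  # placeholder transform that writes a cache file

for cache_file in ds.cache_files:  # assumed to be a list of {"filename": ...} entries
    os.chmod(cache_file["filename"], 0o664)  # rw for user and group, read for others
```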
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2065/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 54 days, 6:25:07
|
https://api.github.com/repos/huggingface/datasets/issues/2061
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2061/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2061/events
|
https://github.com/huggingface/datasets/issues/2061
| 832,596,228
|
MDU6SXNzdWU4MzI1OTYyMjg=
| 2,061
|
Cannot load udpos subsets from xtreme dataset using load_dataset()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4",
"events_url": "https://api.github.com/users/adzcodez/events{/privacy}",
"followers_url": "https://api.github.com/users/adzcodez/followers",
"following_url": "https://api.github.com/users/adzcodez/following{/other_user}",
"gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adzcodez",
"id": 55791365,
"login": "adzcodez",
"node_id": "MDQ6VXNlcjU1NzkxMzY1",
"organizations_url": "https://api.github.com/users/adzcodez/orgs",
"received_events_url": "https://api.github.com/users/adzcodez/received_events",
"repos_url": "https://api.github.com/users/adzcodez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adzcodez",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] |
[
"@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.",
"Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n> \r\n> The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.\r\n\r\nYou're right: \"_\" should be added to the list of labels, and the examples must be sequences of tokens, not singles tokens.\r\n",
"@lhoestq Can you please label this issue with the \"good first issue\" label? I'm not sure I'll find time to fix this.\r\n\r\nTo resolve it, the user should:\r\n1. add `\"_\"` to the list of labels\r\n2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204))\r\n3. update the dummy data\r\n4. update the dataset info\r\n5. [optional] add info about the data fields structure of the udpos subset to the dataset readme",
"I tried fixing this issue, but its working fine in the dev version : \"1.6.2.dev0\"\r\n\r\nI think somebody already fixed it. ",
"Hi,\r\n\r\nafter #2326, the lines with pos tags equal to `\"_\"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free to borrow the logic from [here](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204) if you decide to work on this). ",
"Closed by #2466."
] | 2021-03-16T09:32:13
| 2021-06-18T11:54:11
| 2021-06-18T11:54:10
|
NONE
| null | null | null | null |
Hello,
I am trying to load the udpos English subset from the xtreme dataset, but it fails with an error during loading. I am using datasets v1.4.1, installed via pip. I have tried other udpos languages, which also fail, though loading a different subset altogether (such as XNLI) works fine. I have also tried on Colab and faced the same error.
Reprex is:
`from datasets import load_dataset `
`dataset = load_dataset('xtreme', 'udpos.English')`
The error is:
`KeyError: '_'`
The full traceback is:
KeyError Traceback (most recent call last)
<ipython-input-5-7181359ea09d> in <module>
1 from datasets import load_dataset
----> 2 dataset = load_dataset('xtreme', 'udpos.English')
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
738
739 # Download and prepare data
--> 740 builder_instance.download_and_prepare(
741 download_config=download_config,
742 download_mode=download_mode,
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
576 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
577 if not downloaded_from_gcs:
--> 578 self._download_and_prepare(
579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
580 )
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
654 try:
655 # Prepare split will record examples associated to the split
--> 656 self._prepare_split(split_generator, **prepare_split_kwargs)
657 except OSError as e:
658 raise OSError(
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator)
977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
978 ):
--> 979 example = self.info.features.encode_example(record)
980 writer.write(example)
981 finally:
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example)
946 def encode_example(self, example):
947 example = cast_to_python_objects(example)
--> 948 return encode_nested_example(self, example)
949
950 def encode_batch(self, batch):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
840 # Nested structures: we allow dict, list/tuples, sequences
841 if isinstance(schema, dict):
--> 842 return {
843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0)
841 if isinstance(schema, dict):
842 return {
--> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
845 elif isinstance(schema, (list, tuple)):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 870 return schema.encode_example(obj)
871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
872 return obj
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data)
647 # If a string is given, convert to associated integer
648 if isinstance(example_data, str):
--> 649 example_data = self.str2int(example_data)
650
651 # Allowing -1 to mean no label.
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values)
605 if value not in self._str2int:
606 value = value.strip()
--> 607 output.append(self._str2int[str(value)])
608 else:
609 # No names provided, try to integerize
KeyError: '_'
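For context, here is a minimal sketch of the failure mode: the udpos schema uses a `ClassLabel` over the universal POS tags, while CoNLL-U files also contain `"_"` on multiword-token lines, and `ClassLabel.str2int` raises a `KeyError` for any label outside its name list. The 17-tag list below is the standard UPOS set and is purely illustrative; the exact labels in the dataset script may differ.
```python
from datasets import ClassLabel

# Illustrative only: "_" is not part of the UPOS tag set declared for the pos
# feature, so encoding an example that contains it raises KeyError.
upos = ClassLabel(names=[
    "ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN", "NUM",
    "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X",
])
print(upos.str2int("NOUN"))  # 7
print(upos.str2int("_"))     # KeyError: '_'
```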
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2061/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 94 days, 2:21:57
|
https://api.github.com/repos/huggingface/datasets/issues/2059
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2059/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2059/events
|
https://github.com/huggingface/datasets/issues/2059
| 832,579,156
|
MDU6SXNzdWU4MzI1NzkxNTY=
| 2,059
|
Error while following docs to load the `ted_talks_iwslt` dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4",
"events_url": "https://api.github.com/users/ekdnam/events{/privacy}",
"followers_url": "https://api.github.com/users/ekdnam/followers",
"following_url": "https://api.github.com/users/ekdnam/following{/other_user}",
"gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ekdnam",
"id": 40426312,
"login": "ekdnam",
"node_id": "MDQ6VXNlcjQwNDI2MzEy",
"organizations_url": "https://api.github.com/users/ekdnam/orgs",
"received_events_url": "https://api.github.com/users/ekdnam/received_events",
"repos_url": "https://api.github.com/users/ekdnam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ekdnam",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
| null |
[] |
[
"@skyprince999 as you authored the PR for this dataset, any comments?",
"This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)"
] | 2021-03-16T09:12:19
| 2021-03-16T18:00:31
| 2021-03-16T18:00:07
|
NONE
| null | null | null | null |
I am currently trying to load the `ted_talks_iwslt` dataset in Google Colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error attached below.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-7dcc67154ef9> in <module>()
----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
4 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
730 hash=hash,
731 features=features,
--> 732 **config_kwargs,
733 )
734
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs)
927
928 def __init__(self, *args, writer_batch_size=None, **kwargs):
--> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
930 # Batch size used by the ArrowWriter
931 # It defines the number of samples that are kept in memory before writing them
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
241 name,
242 custom_features=features,
--> 243 **config_kwargs,
244 )
245
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
338 config_kwargs["version"] = self.VERSION
--> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
340
341 # otherwise use the config_kwargs to overwrite the attributes
/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)
219 description=description,
220 version=datasets.Version("1.1.0", ""),
--> 221 **kwargs,
222 )
223
TypeError: __init__() got multiple values for keyword argument 'version'
```
How can I resolve this?
PS: Thanks a lot @huggingface team for creating this great library!
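As noted in the comments above, the fix landed on the master branch ahead of the next release, so until that release is out a workaround sketch (assuming you don't mind installing `datasets` from source in Colab) is:
```python
# In a Colab cell, first install datasets from the master branch (a shell command,
# hence the leading "!" when run in a notebook cell):
#   !pip install -q git+https://github.com/huggingface/datasets
from datasets import load_dataset

dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```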
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2059/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 8:47:48
|
https://api.github.com/repos/huggingface/datasets/issues/2058
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2058/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2058/events
|
https://github.com/huggingface/datasets/issues/2058
| 832,159,844
|
MDU6SXNzdWU4MzIxNTk4NDQ=
| 2,058
|
Is it possible to convert a `tfds` to HuggingFace `dataset`?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abarbosa94",
"id": 6608232,
"login": "abarbosa94",
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abarbosa94",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! You can either save the TF dataset to one of the formats supported by datasets (`parquet`, `csv`, `json`, ...) or pass a generator function to `Dataset.from_generator` that yields its examples."
] | 2021-03-15T20:18:47
| 2023-07-25T16:47:40
| 2023-07-25T16:47:40
|
CONTRIBUTOR
| null | null | null | null |
I was having some weird bugs with the HuggingFace version of the `C4` dataset, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to the HuggingFace dataset format :)
I can also open a new issue reporting the bug I'm getting with `datasets.load_dataset('c4','en')` later on, if you think that would be useful.
Thanks!
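For anyone landing here, below is a minimal sketch of the second route mentioned in the reply above: passing a generator to `Dataset.from_generator` (available in recent versions of `datasets`). The `imdb_reviews` TFDS dataset and its `text`/`label` fields are only stand-ins for whatever TFDS dataset you actually want to convert.
```python
import tensorflow_datasets as tfds
from datasets import Dataset

# Load any TFDS split; imdb_reviews is just a small stand-in with a "text" field.
tf_split = tfds.load("imdb_reviews", split="train")

def gen():
    # tfds.as_numpy yields plain numpy/bytes objects instead of tf.Tensors
    for example in tfds.as_numpy(tf_split):
        yield {"text": example["text"].decode("utf-8"), "label": int(example["label"])}

hf_dataset = Dataset.from_generator(gen)
print(hf_dataset)
```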
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2058/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 861 days, 20:28:53
|
https://api.github.com/repos/huggingface/datasets/issues/2056
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2056/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2056/events
|
https://github.com/huggingface/datasets/issues/2056
| 831,718,397
|
MDU6SXNzdWU4MzE3MTgzOTc=
| 2,056
|
issue with opus100/en-fr dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ",
"Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import MT5TokenizerFast\r\n\r\ndef get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer):\r\n datasets = load_dataset(dataset_name, dataset_config_name, script_version=\"master\")\r\n column_names = datasets[\"train\"].column_names\r\n text_column_name = \"translation\"\r\n def process_dataset(datasets):\r\n def process_function(examples):\r\n lang = \"fr\"\r\n return {\"src_texts\": [example[lang] for example in examples[text_column_name]]}\r\n datasets = datasets.map(\r\n process_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n )\r\n return datasets\r\n datasets = process_dataset(datasets)\r\n text_column_name = \"src_texts\"\r\n column_names = [\"src_texts\"]\r\n def tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n tokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True\r\n )\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer_kwargs = {\r\n \"cache_dir\": None,\r\n \"use_fast\": True,\r\n \"revision\": \"main\",\r\n \"use_auth_token\": None\r\n }\r\n tokenizer = MT5TokenizerFast.from_pretrained(\"google/mt5-small\", **tokenizer_kwargs)\r\n get_tokenized_dataset(dataset_name=\"opus100\", dataset_config_name=\"en-fr\", tokenizer=tokenizer)\r\n~ \r\n```",
"as per https://github.com/huggingface/tokenizers/issues/626 this looks like to be the tokenizer bug, I therefore, reported it there https://github.com/huggingface/tokenizers/issues/626 and I am closing this one."
] | 2021-03-15T11:32:42
| 2021-03-16T15:49:00
| 2021-03-16T15:48:59
|
NONE
| null | null | null | null |
Hi
I am running the run_mlm.py code from the huggingface repo with the opus100/fr-en pair and I am getting this error. Note that the error occurs only for this pair and not for the other pairs. Any idea why this is happening, and how I can solve it?
Thanks a lot @lhoestq for your help in advance.
```
thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s]
Traceback (most recent call last):
File "run_mlm.py", line 550, in <module>
main()
File "run_mlm.py", line 412, in main
in zip(data_args.dataset_name, data_args.dataset_config_name)]
File "run_mlm.py", line 411, in <listcomp>
logger) for dataset_name, dataset_config_name\
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset
load_from_cache_file=not data_args.overwrite_cache,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp>
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map
update_data=update_data,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single
batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus
is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2056/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 4:16:17
|
https://api.github.com/repos/huggingface/datasets/issues/2055
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2055/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2055/events
|
https://github.com/huggingface/datasets/issues/2055
| 831,684,312
|
MDU6SXNzdWU4MzE2ODQzMTI=
| 2,055
|
Is there a way to override a dataset object saved with save_to_disk?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi\r\nYou can rename the arrow file and update the name in `state.json`",
"I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_dataset.map(\r\n partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=self.context_tokenizer),\r\n batched=True,\r\n batch_size=1,\r\n features=new_features,\r\n cache_file_name=cache_arrow_path,\r\n load_from_cache_file=False\r\n )\r\n```\r\nSo here we set a cache_file_name , after this it uses the same file name when saving again and again. ",
"I'm not sure I understand your issue, can you elaborate ?\r\n\r\n`cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.",
"Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset object every time with a random file name, especially when we do some transformations to dataset objects such as map or shards. This way, we keep collecting unwanted files that will eventually eat up all the disk space. \r\n\r\nBut if we can save the dataset object every time by a single name like **data_shard_1.arrow**, it will automatically remove the previous file and save the new one in the same directory. I found the above-mentioned code snippet useful to complete this task. \r\n\r\nIs this clear?"
] | 2021-03-15T10:50:53
| 2021-03-22T04:06:17
| 2021-03-22T04:06:17
|
NONE
| null | null | null | null |
At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to overwrite the saved file, i.e. keep a fixed file name across saves?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2055/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 17:15:24
|
https://api.github.com/repos/huggingface/datasets/issues/2054
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2054/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2054/events
|
https://github.com/huggingface/datasets/issues/2054
| 831,597,665
|
MDU6SXNzdWU4MzE1OTc2NjU=
| 2,054
|
Could not find file for ZEST dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhadreshpsavani",
"id": 26653468,
"login": "bhadreshpsavani",
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhadreshpsavani",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
| null |
[] |
[
"The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.",
"This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)",
"Thanks @lhoestq and @matt-peters ",
"I am closing this issue since its fixed!"
] | 2021-03-15T09:11:58
| 2021-05-03T09:30:24
| 2021-05-03T09:30:24
|
CONTRIBUTOR
| null | null | null | null |
I am trying to use the zest dataset from Allen AI using the code below in Colab,
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("zest")
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
612 )
613 elif response is not None and response.status_code == 404:
--> 614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
616 raise ConnectionError("Couldn't reach {}".format(url))
FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhadreshpsavani",
"id": 26653468,
"login": "bhadreshpsavani",
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhadreshpsavani",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2054/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 49 days, 0:18:26
|
https://api.github.com/repos/huggingface/datasets/issues/2052
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2052/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2052/events
|
https://github.com/huggingface/datasets/issues/2052
| 831,135,704
|
MDU6SXNzdWU4MzExMzU3MDQ=
| 2,052
|
Timit_asr dataset repeats examples
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4",
"events_url": "https://api.github.com/users/fermaat/events{/privacy}",
"followers_url": "https://api.github.com/users/fermaat/followers",
"following_url": "https://api.github.com/users/fermaat/following{/other_user}",
"gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fermaat",
"id": 7583522,
"login": "fermaat",
"node_id": "MDQ6VXNlcjc1ODM1MjI=",
"organizations_url": "https://api.github.com/users/fermaat/orgs",
"received_events_url": "https://api.github.com/users/fermaat/received_events",
"repos_url": "https://api.github.com/users/fermaat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fermaat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fermaat",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```",
"Ty!"
] | 2021-03-14T11:43:43
| 2021-03-15T10:37:16
| 2021-03-15T10:37:16
|
NONE
| null | null | null | null |
Summary
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same
Steps to reproduce
As an example, the code below prints the text column of the training split:
Code snippet:
```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']
#['Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
```
The same behavior happens for other columns
Expected behavior:
Different rows, reflecting the actual timit_asr data
Actual behavior:
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different
Debug info
Streamlit version: (get it with $ streamlit version)
Python version: Python 3.6.12
Using Conda? PipEnv? PyEnv? Pex? Using pip
OS version: Centos-release-7-9.2009.1.el7.centos.x86_64
Additional information
You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4",
"events_url": "https://api.github.com/users/fermaat/events{/privacy}",
"followers_url": "https://api.github.com/users/fermaat/followers",
"following_url": "https://api.github.com/users/fermaat/following{/other_user}",
"gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fermaat",
"id": 7583522,
"login": "fermaat",
"node_id": "MDQ6VXNlcjc1ODM1MjI=",
"organizations_url": "https://api.github.com/users/fermaat/orgs",
"received_events_url": "https://api.github.com/users/fermaat/received_events",
"repos_url": "https://api.github.com/users/fermaat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fermaat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fermaat",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2052/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 22:53:33
|
https://api.github.com/repos/huggingface/datasets/issues/2050
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2050/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2050/events
|
https://github.com/huggingface/datasets/issues/2050
| 831,006,551
|
MDU6SXNzdWU4MzEwMDY1NTE=
| 2,050
|
Build custom dataset to fine-tune Wav2Vec2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4",
"events_url": "https://api.github.com/users/Omarnabk/events{/privacy}",
"followers_url": "https://api.github.com/users/Omarnabk/followers",
"following_url": "https://api.github.com/users/Omarnabk/following{/other_user}",
"gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Omarnabk",
"id": 72882909,
"login": "Omarnabk",
"node_id": "MDQ6VXNlcjcyODgyOTA5",
"organizations_url": "https://api.github.com/users/Omarnabk/orgs",
"received_events_url": "https://api.github.com/users/Omarnabk/received_events",
"repos_url": "https://api.github.com/users/Omarnabk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Omarnabk",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] |
[
"@lhoestq - We could simply use the \"general\" json dataset for this no? ",
"Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\ntest_dataset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n```\r\n\r\nYou just need to make sure that the data contain the paths to the audio files.\r\nIf not, feel free to use `.map()` to add them.",
"Many thanks! that was what I was looking for. "
] | 2021-03-13T22:01:10
| 2021-03-15T09:27:28
| 2021-03-15T09:27:28
|
NONE
| null | null | null | null |
Thank you for your recent tutorial on how to fine-tune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcripts and their audio files) in a JSON file.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4",
"events_url": "https://api.github.com/users/Omarnabk/events{/privacy}",
"followers_url": "https://api.github.com/users/Omarnabk/followers",
"following_url": "https://api.github.com/users/Omarnabk/following{/other_user}",
"gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Omarnabk",
"id": 72882909,
"login": "Omarnabk",
"node_id": "MDQ6VXNlcjcyODgyOTA5",
"organizations_url": "https://api.github.com/users/Omarnabk/orgs",
"received_events_url": "https://api.github.com/users/Omarnabk/received_events",
"repos_url": "https://api.github.com/users/Omarnabk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Omarnabk",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2050/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 11:26:18
|
https://api.github.com/repos/huggingface/datasets/issues/2048
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2048/events
|
https://github.com/huggingface/datasets/issues/2048
| 830,953,431
|
MDU6SXNzdWU4MzA5NTM0MzE=
| 2,048
|
GitHub is not always available - probably need a backup
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2021-03-13T18:03:32
| 2022-04-01T15:27:10
| 2022-04-01T15:27:10
|
CONTRIBUTOR
| null | null | null | null |
Yesterday morning github wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```
Suggestion: have a failover system that replicates the data on another system and fall back to it if GitHub isn't reachable? Perhaps GitHub can be the master and the replica a slave, so there is only one true source.
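A partial mitigation on the user side (a sketch, assuming the script or metric has already been downloaded at least once and that your `datasets` version supports offline mode) is to fall back to the local cache when GitHub is down:
```python
import os

# With offline mode enabled, datasets reuses the locally cached copy of a
# dataset/metric script instead of trying to re-download it from GitHub.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_metric

sacrebleu = load_metric("sacrebleu")  # served from the local cache, no network call
```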
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 383 days, 21:23:38
|
https://api.github.com/repos/huggingface/datasets/issues/2046
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2046/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2046/events
|
https://github.com/huggingface/datasets/issues/2046
| 830,423,033
|
MDU6SXNzdWU4MzA0MjMwMzM=
| 2,046
|
add_faiss_index gets very slow when doing it iteratively
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?",
"Hi,\r\n I am running the add_faiss_index during the training process of the RAG from the master process (rank 0). But at the exact moment, I do not run any other process since I do it in every 5000 training steps. \r\n \r\n I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare the time taken to finish the add_faiss_index process on use_own_knowleldge_dataset.py vs the training loop thing. The training loop thing takes 40 mins more. It might be natural right? \r\n \r\n \r\n at the moment it uses around 40 cores of a 96 core machine (I am fine-tuning the entire process). ",
"Can you try to set the number of threads manually ?\r\nIf you set the same number of threads for both the `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time.\r\nYou can see how to set the number of thread in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls",
"Ok, I will report the details too soon. I am the first one on the list and currently add_index being computed for the 3rd time in the loop. Actually seems like the time is taken to complete each interaction is the same, but around 1 hour more compared to running it without the training loop. A the moment this takes 5hrs and 30 mins. If there is any way to faster the process, an end-to-end rag will be perfect. So I will also try out with different thread numbers too. \r\n\r\n\r\n",
"@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset has the ability to fit into the GPU memory. Although this might work, in the long-term this is not that practical for me.\r\n\r\nhttps://github.com/matsui528/faiss_tips",
"@lhoestq \r\n\r\nHi, I executed the **use_own_dataset.py** script independently and ask a few of my friends to run their programs in the HPC machine at the same time. \r\n\r\n Once there are so many other processes are running the add_index function gets slows down naturally. So basically the speed of the add_index depends entirely on the number of CPU processes. Then I set the number of threads as you have mentioned and got actually the same time for RAG training and independat running. So you are correct! :) \r\n\r\n \r\n Then I added this [issue in Faiss repostiary](https://github.com/facebookresearch/faiss/issues/1767). I got an answer saying our current **IndexHNSWFlat** can get slow for 30 million vectors and it would be better to use alternatives. What do you think?",
"It's a matter of tradeoffs.\r\nHSNW is fast at query time but takes some time to build.\r\nA flat index is flat to build but is \"slow\" at query time.\r\nAn IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HSNW).\r\n\r\nNote that for an IVF index you would need to have an `nprobe` parameter (number of cells to visit for one query, there are `nlist` in total) that is not too small in order to have good retrieval accuracy, but not too big otherwise the queries will take too much time. From the faiss documentation:\r\n> The nprobe parameter is always a way of adjusting the tradeoff between speed and accuracy of the result. Setting nprobe = nlist gives the same result as the brute-force search (but slower).\r\n\r\nFrom my experience with indexes on DPR embeddings, setting nprobe around 1/4 of nlist gives really good retrieval accuracy and there's no need to have a value higher than that (or you would need to brute-force in order to see a difference).",
"@lhoestq \r\n\r\nThanks a lot for sharing all this prior knowledge. \r\n\r\nJust asking what would be a good nlist of parameters for 30 million embeddings?",
"When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.\r\nFor more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset)",
"Thanks a lot. I was lost with calling the index from class and using faiss_index_factory. ",
"@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, IVF index suits well for my case and it is a lot faster. The use of this can make the entire RAG end-to-end trainable lot faster. So I will close this issue. Will do the final PR soon. "
] | 2021-03-12T20:27:18
| 2021-03-24T22:29:11
| 2021-03-24T22:29:11
|
NONE
| null | null | null | null |
As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowledge_dataset.py). Now it usually takes 5 hrs. Is this normal? Any way to make this process faster?
@lhoestq
```python
def training_step(self, batch, batch_idx) -> Dict:
    if (not batch_idx == 0) and (batch_idx % 5 == 0):
        print("******************************************************")
        ctx_encoder = self.trainer.model.module.module.model.rag.ctx_encoder
        model_copy = type(ctx_encoder)(self.config_dpr)  # get a new instance; this will be loaded on the CPU
        model_copy.load_state_dict(ctx_encoder.state_dict())  # copy weights and stuff
        list_of_gpus = ['cuda:2', 'cuda:3']
        c_dir = '/custom/cache/dir'
        kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"], cache_dir=c_dir)
        print(kb_dataset)
        n = len(list_of_gpus)  # number of dedicated GPUs
        kb_list = [kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
        # kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')
        print(self.trainer.global_rank)
        dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]), kb_list[self.trainer.global_rank])
        output = [None for _ in list_of_gpus]
        # self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
        dist.all_gather_object(output, dataset_shards)
        # The creation and re-initialization of the new index
        if self.trainer.global_rank == 0:  # saving will be done in the main process
            combined_dataset = concatenate_datasets(output)
            passages_path = self.config.passages_path
            logger.info("saving the dataset")
            # combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
            combined_dataset.save_to_disk(passages_path)
            logger.info("Add faiss index to the dataset that consists of embeddings")
            embedding_dataset = combined_dataset
            index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
            embedding_dataset.add_faiss_index("embeddings", custom_index=index)
            embedding_dataset.get_index("embeddings").save(self.config.index_path)
```
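Following the discussion in the comments above (HNSW is slow to build at this scale, while an IVF index builds quickly, with `nlist` between 4*sqrt(n) and 16*sqrt(n) and `nprobe` around a quarter of `nlist`), here is a minimal sketch of swapping the HNSW index for an IVF one. The dataset path is a placeholder; the `embeddings` column and the 768 dimension are taken from the snippet above, and the parameter values are assumptions to tune rather than recommendations.
```python
import math

import faiss
from datasets import load_from_disk

embedding_dataset = load_from_disk("path/to/passages")  # placeholder path
n, d = len(embedding_dataset), 768

nlist = int(4 * math.sqrt(n))                 # 4*sqrt(n)..16*sqrt(n) per the Faiss wiki
quantizer = faiss.IndexFlatIP(d)
index = faiss.IndexIVFFlat(quantizer, d, nlist, faiss.METRIC_INNER_PRODUCT)
index.nprobe = max(1, nlist // 4)             # ~1/4 of nlist, per the comments above

# faiss.omp_set_num_threads(32)               # optionally pin the number of CPU threads

embedding_dataset.add_faiss_index(
    "embeddings",
    custom_index=index,
    train_size=min(n, 1_000_000),             # IVF indexes need training on a sample of vectors
)
embedding_dataset.get_index("embeddings").save("path/to/my_index.faiss")
```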
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2046/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 12 days, 2:01:53
|
https://api.github.com/repos/huggingface/datasets/issues/2040
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2040/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2040/events
|
https://github.com/huggingface/datasets/issues/2040
| 830,169,387
|
MDU6SXNzdWU4MzAxNjkzODc=
| 2,040
|
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simonschoe",
"id": 53626067,
"login": "simonschoe",
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simonschoe",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.",
"Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive/MyDrive/data_target_task/dataset_b/'",
"In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```",
"Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! "
] | 2021-03-12T14:27:00
| 2021-08-04T18:00:43
| 2021-08-04T18:00:43
|
NONE
| null | null | null | null |
Hi there,
I am trying to concatenate two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DatasetDict`s and the `PATH_DATA_CLS_*` are `Path` objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yielding the following error:
```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
I've been trying to solve this for quite some time now. Both `DatasetDict`s were created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove columns, rename columns). I can't figure it out, though...
`load_from_disk(PATH_DATA_CLS_A)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 785
})
```
`load_from_disk(PATH_DATA_CLS_B)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 3341
})
```
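Putting the `flatten_indices()` suggestion from the comments above together with the snippet in the issue, a minimal workaround sketch looks like this (the paths are placeholders standing in for `PATH_DATA_CLS_A` / `PATH_DATA_CLS_B`):
```python
from pathlib import Path
from datasets import load_from_disk, concatenate_datasets

# Placeholder paths standing in for PATH_DATA_CLS_A / PATH_DATA_CLS_B above.
PATH_DATA_CLS_A = Path("drive/MyDrive/data_target_task/dataset_a")
PATH_DATA_CLS_B = Path("drive/MyDrive/data_target_task/dataset_b")

# flatten_indices() materializes the indices mapping, removing the
# memory-vs-disk mismatch that triggers the ValueError.
train_a = load_from_disk(str(PATH_DATA_CLS_A))["train"].flatten_indices()
train_b = load_from_disk(str(PATH_DATA_CLS_B))["train"].flatten_indices()
combined = concatenate_datasets([train_a, train_b])
print(combined)
```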
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2040/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 145 days, 3:33:43
|
https://api.github.com/repos/huggingface/datasets/issues/2038
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2038/events
|
https://github.com/huggingface/datasets/issues/2038
| 830,036,875
|
MDU6SXNzdWU4MzAwMzY4NzU=
| 2,038
|
outdated dataset_infos.json might fail verifications
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/songfeng",
"id": 2062185,
"login": "songfeng",
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"repos_url": "https://api.github.com/users/songfeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/songfeng",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```",
"Fixed by #2041, thanks again @songfeng !"
] | 2021-03-12T11:41:54
| 2021-03-16T16:27:40
| 2021-03-16T16:27:40
|
CONTRIBUTOR
| null | null | null | null |
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It makes the data loader fail when verifying the download checksums, etc.
Could you please update this file, or point me to how to update it myself?
Thank you.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4 days, 4:45:46
|
https://api.github.com/repos/huggingface/datasets/issues/2036
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2036/events
|
https://github.com/huggingface/datasets/issues/2036
| 829,909,258
|
MDU6SXNzdWU4Mjk5MDkyNTg=
| 2,036
|
Cannot load wikitext
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gpwner",
"id": 19349207,
"login": "Gpwner",
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gpwner",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Solved!"
] | 2021-03-12T09:09:39
| 2021-03-15T08:45:02
| 2021-03-15T08:44:44
|
NONE
| null | null | null | null |
When I execute the following code
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I got an error. Any help?
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py
```
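Assuming the connection issue is resolved, note that `wikitext` also expects a configuration name; a minimal sketch (the config name here is chosen as an example):
```python
from datasets import load_dataset

# "wikitext-2-raw-v1" is one of the available configurations
# (others include "wikitext-103-raw-v1", "wikitext-2-v1", "wikitext-103-v1")
test_dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
print(test_dataset)
```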
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gpwner",
"id": 19349207,
"login": "Gpwner",
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gpwner",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 23:35:05
|
https://api.github.com/repos/huggingface/datasets/issues/2035
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2035/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2035/events
|
https://github.com/huggingface/datasets/issues/2035
| 829,475,544
|
MDU6SXNzdWU4Mjk0NzU1NDQ=
| 2,035
|
wiki40b/wikipedia for almost all languages cannot be downloaded
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only chance to be able training the models at scale and I am grateful for your help.\r\n\r\n",
"Hi @dorost1234,\r\nTry installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver` followed by loading dataset like this using beam runner.\r\n\r\n`dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\n I also read in error stack trace that:\r\n\r\n> Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc.\r\n\r\nWorked perfectly fine after this (Ignore these warnings)\r\n\r\n\r\n\r\n",
"For wikipedia dataset, looks like the files it's looking for are no longer available. For `bg`, I checked [here](https://dumps.wikimedia.org/bgwiki/). For this I think `dataset_infos.json` for this dataset has to made again? You'll have to load this dataset also using beam runner.\r\n\r\n",
"Hello @dorost1234,\r\n\r\nIndeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to preform this preprocessing.\r\n\r\nFor some specific default parameters (English Wikipedia), Hugging Face has already preprocessed the dataset for you (and it is stored in the cloud). That is the reason why you do not get the error for English: the preprocessing is already done by HF and you just get the preprocessed dataset; Apache Beam is not required in that case.",
"Hi\nI really appreciate if huggingface can kindly provide preprocessed\ndatasets, processing these datasets require sufficiently large resources\nand I do not have unfortunately access to, and perhaps many others too.\nthanks\n\nOn Fri, Mar 12, 2021 at 9:04 AM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hello @dorost1234 <https://github.com/dorost1234>,\n>\n> Indeed, Wikipedia datasets need a lot of preprocessing and this is done\n> using Apache Beam. That is the reason why it is required that you install\n> Apache Beam in order to preform this preprocessing.\n>\n> For some specific default parameters (English Wikipedia), Hugging Face has\n> already preprocessed the dataset for you (and it is stored in the cloud).\n> That is the reason why you do not get the error for English: the\n> preprocessing is already done by HF and you just get the preprocessed\n> dataset; Apache Beam is not required in that case.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-797310899>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXACFQZAGMK4VGXRETTDHDI3ANCNFSM4ZA5R2UA>\n> .\n>\n",
"Hi everyone\r\nthanks for the helpful pointers, I did it as @bhavitvyamalik suggested, for me this freezes on this command for several hours, \r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n`\r\n\r\nDo you know how long this takes? Any specific requirements the machine should have? like very large memory or so? @lhoestq \r\n\r\nthanks \r\n\r\n\r\n",
"HI @dorost1234, \r\nThe dataset size is 631.84 MiB so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see if it's downloading the dataset or not (use `nload` if you're using linux/mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used `download_and_extract` here that's why there's no download progress bar.",
"Hi\r\nthanks, my internet speed should be good, but this really freezes for me, this is how I try to get this dataset:\r\n\r\n`from datasets import load_dataset\r\ndataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\nthe output I see if different also from what you see after writing this command:\r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...`\r\n\r\ndo you have any idea why it might get freezed? anything I am missing @lhoestq @bhavitvyamalik. Do I need maybe to set anything special for apache-beam? \r\n\r\nthanks a lot \r\n\r\nOn Tue, Mar 16, 2021 at 9:03 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> HI @dorost1234 <https://github.com/dorost1234>,\r\n> The dataset size is 631.84 MiB so depending on your internet speed it'll\r\n> take some time. You can monitor your internet speed meanwhile to see if\r\n> it's downloading the dataset or not (use nload if you're using linux/mac\r\n> to monitor the same). In my case it took around 3-4 mins. Since they\r\n> haven't used download_and_extract here that's why there's no download\r\n> progress bar.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800044303>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMQIHNNLM2LGG6QKZ73TD4GDJANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n",
"I tried this on another machine (followed the same procedure I've mentioned above). This is what it shows (during the freeze period) for me:\r\n```\r\n>>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\nDownloading: 5.26kB [00:00, 1.23MB/s] \r\nDownloading: 1.40kB [00:00, 327kB/s] \r\nDownloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\nWARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\nConnecting anonymously.\r\nWARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n```\r\nAfter around 10 minutes, here's the loading of dataset:\r\n```\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\nDataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n```",
"Hi\r\nI honestly also now tried on another machine and nothing shows up after\r\nhours of waiting. Are you sure you have not set any specific setting? maybe\r\ngoogle cloud which seems it is used here, needs some credential setting?\r\nthanks for any suggestions on this\r\n\r\nOn Tue, Mar 16, 2021 at 10:02 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> I tried this on another machine (followed the same procedure I've\r\n> mentioned above). This is what it shows (during the freeze period) for me:\r\n>\r\n> >>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\n> Downloading: 5.26kB [00:00, 1.23MB/s]\r\n> Downloading: 1.40kB [00:00, 327kB/s]\r\n> Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n> WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\n> Connecting anonymously.\r\n> WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n>\r\n> After around 10 minutes, here's the loading of dataset:\r\n>\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\n> Dataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800081772>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMX6A2ZTRZUIIZVFRCDTD4NC3ANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n",
"Closing as `apache-beam`/`tensorflow_datasets` are no longer needed to load `wikipedia`/`wiki40b`."
] | 2021-03-11T19:54:54
| 2024-03-15T16:09:49
| 2024-03-15T16:09:48
|
NONE
| null | null | null | null |
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting the error below for almost all languages except English. @lhoestq I would be grateful if you could assist me with this.
I really need the majority of the languages in this dataset to be able to train my models before a deadline, and your great, scalable, well-written library is my only hope for training the models at scale while being low on resources.
thank you very much.
```
(fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py
Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...
Traceback (most recent call last):
File "test_data.py", line 3, in <module>
dataset = load_dataset("wiki40b", "cs")
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare
import apache_beam as beam
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module>
from apache_beam import io
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module>
from apache_beam.io.avroio import *
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module>
import avro
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module>
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource
NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt'
```
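A minimal sketch of the workaround described in the comments above: install Apache Beam, then pass a Beam runner explicitly (the `DirectRunner` runs the preprocessing pipeline locally, which the comments report works for the smaller language dumps):
```python
# pip install 'apache-beam[gcp]'
from datasets import load_dataset

# wiki40b needs a Beam runner for its preprocessing step;
# DirectRunner executes the pipeline on the local machine
dataset = load_dataset("wiki40b", "cs", beam_runner="DirectRunner")
print(dataset)
```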
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2035/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1099 days, 20:14:54
|
https://api.github.com/repos/huggingface/datasets/issues/2032
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2032/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2032/events
|
https://github.com/huggingface/datasets/issues/2032
| 829,250,912
|
MDU6SXNzdWU4MjkyNTA5MTI=
| 2,032
|
Use Arrow filtering instead of writing a new arrow file for Dataset.filter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m",
"user_view_type": "public"
}
] |
[
"Actually table.filter returns a new table in memory, which can fill users RAM.\r\n\r\nTherefore it's not a good solution if we want to keep supporting bigger than RAM datastes"
] | 2021-03-11T15:18:50
| 2024-01-19T13:26:32
| 2024-01-19T13:26:32
|
MEMBER
| null | null | null | null |
Currently the filter method reads the dataset batch by batch to write a new, filtered Arrow file on disk. Therefore all the reading + writing can take some time.
Using a mask directly on the Arrow table doesn't do any read or write operation, so it's significantly quicker.
I think there are two cases:
- if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)`
- if the dataset has an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)`
The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table.
The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask.
Feel free to discuss this idea in this thread :)
One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle.
cc @theo-m @gchhablani
related issues: #1796 #1949
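For illustration, a minimal PyArrow sketch of the mask-based filtering described above, on a toy table (this is not the actual `Dataset.filter` implementation); note the caveat from the comments that `Table.filter` returns a new in-memory table:
```python
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

# Build a boolean mask from a predicate and apply it directly on the table:
# no batch-by-batch read/write of a new arrow file is involved
mask = pc.equal(table["label"], 1)
filtered = table.filter(mask)
print(filtered.num_rows)  # 2
```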
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2032/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1043 days, 22:07:42
|
https://api.github.com/repos/huggingface/datasets/issues/2031
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2031/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2031/events
|
https://github.com/huggingface/datasets/issues/2031
| 829,122,778
|
MDU6SXNzdWU4MjkxMjI3Nzg=
| 2,031
|
wikipedia.py generator that extracts XML doesn't release memory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/miyamonz",
"id": 6331508,
"login": "miyamonz",
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/miyamonz",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?",
"OK! I'll send it later."
] | 2021-03-11T12:51:24
| 2021-03-22T08:33:52
| 2021-03-22T08:33:52
|
CONTRIBUTOR
| null | null | null | null |
I tried downloading Japanese Wikipedia, but it always failed, probably because it ran out of memory.
I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502
`root.clear()` is intended to clear memory, but it doesn't.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494
I replaced them with `elem.clear()`, then it seems to work correctly.
Here is a notebook to reproduce it:
https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
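For context, a minimal sketch of the memory-friendly `iterparse` pattern the fix relies on (clearing each processed element instead of only the root); this is an illustrative example, not the actual `wikipedia.py` code:
```python
import xml.etree.ElementTree as ET

def iter_pages(xml_path):
    """Yield serialized <page> elements from a Wikipedia XML dump."""
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag.endswith("page"):
            yield ET.tostring(elem, encoding="unicode")
            # elem.clear() releases the parsed subtree of this page;
            # clearing only the root does not, because the already-built
            # child elements keep the memory referenced
            elem.clear()
```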
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/miyamonz",
"id": 6331508,
"login": "miyamonz",
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/miyamonz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2031/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10 days, 19:42:28
|
https://api.github.com/repos/huggingface/datasets/issues/2029
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2029/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2029/events
|
https://github.com/huggingface/datasets/issues/2029
| 829,097,290
|
MDU6SXNzdWU4MjkwOTcyOTA=
| 2,029
|
Loading a faiss index KeyError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nbroad1881",
"id": 24982805,
"login": "nbroad1881",
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nbroad1881",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] |
[
"In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r\n```python\r\ndataset2 = load_from_disk(dataset_filename)\r\n```\r\nwhere `dataset_filename` is the place where you saved you dataset with the embeddings in the first place.",
"Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index \r\n\r\nI copy-pasted it here.\r\n\r\n> When you are done with your queries you can save your index on disk:\r\n> \r\n> ```python\r\n> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n> ```\r\n> Then reload it later:\r\n> \r\n> ```python\r\n> ds = load_dataset('crime_and_punish', split='train[:100]')\r\n> ds.load_faiss_index('embeddings', 'my_index.faiss')\r\n> ```",
"Hi !\r\n\r\nThe code of the example is valid.\r\nAn index is a search engine, it's not considered a column of a dataset.\r\nWhen you do `ds.load_faiss_index(\"embeddings\", 'my_index.faiss')`, it attaches an index named \"embeddings\" to the dataset but it doesn't re-add the \"embeddings\" column. You can list the indexes of a dataset by using `ds.list_indexes()`.\r\n\r\nIf I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nThis looks misleading indeed, and we should add a note to make it more explicit that it doesn't store the column that was used to build the index.\r\n\r\nFeel free to open a PR to suggest an improvement on the documentation if you want to contribute :)",
"> If I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nYes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`\r\n\r\nWhat I learned was\r\n1. column and index are different\r\n2. loading the index does not create a column\r\n3. the column is not needed to be able to use the index\r\n4. RAG needs both the embeddings column and the index\r\n\r\nIf I can come up with a way to articulate this in the right spot in the docs, I'll open a PR"
] | 2021-03-11T12:16:13
| 2021-03-12T00:21:09
| 2021-03-12T00:21:09
|
NONE
| null | null | null | null |
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (dataset2) with the same text and label information as dataset1
6. Try to load the faiss index from file to dataset2
7. Get `KeyError: "Column embeddings not in the dataset"`
I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU.
https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing
Ubuntu Version
VERSION="18.04.5 LTS (Bionic Beaver)"
datasets==1.4.1
faiss==1.5.3
faiss-gpu==1.7.0
torch==1.8.0+cu101
transformers==4.3.3
NVIDIA-SMI 460.56
Driver Version: 460.32.03
CUDA Version: 11.2
Tesla K80
I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index
I included the exact code from the documentation at the end of the notebook to show that they don't work either.
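A minimal sketch of the loading pattern suggested in the comments (reload the dataset that still has the "embeddings" column, then attach the saved index); the save path below is hypothetical:
```python
from datasets import load_from_disk

# Reload the dataset that was saved together with its "embeddings" column
ds = load_from_disk("data/ds_with_embeddings")  # hypothetical path

# Attaching the index does not (re)create a column; it only registers
# a search index under that name
ds.load_faiss_index("embeddings", "my_index.faiss")

print(ds.list_indexes())                # ['embeddings']
print("embeddings" in ds.column_names)  # True, because we reloaded from disk
```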
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nbroad1881",
"id": 24982805,
"login": "nbroad1881",
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nbroad1881",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2029/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 12:04:56
|
https://api.github.com/repos/huggingface/datasets/issues/2026
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2026/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2026/events
|
https://github.com/huggingface/datasets/issues/2026
| 828,194,467
|
MDU6SXNzdWU4MjgxOTQ0Njc=
| 2,026
|
KeyError on using map after renaming a column
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new column name which is why this new column is missing in the output.",
"Hi @mariosasko,\n\nThanks for opening a PR on this :)\nWhy does the old name also disappear?",
"I just merged a @mariosasko 's PR that fixes this issue.\r\nIf it happens again, feel free to re-open :)"
] | 2021-03-10T18:54:17
| 2021-03-11T14:39:34
| 2021-03-11T14:38:40
|
CONTRIBUTOR
| null | null | null | null |
Hi,
I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])])
def prepare_features(examples):
images = []
labels = []
print(examples)
for example_idx, example in enumerate(examples["image"]):
if transform is not None:
images.append(transform(examples["image"][example_idx].permute(2,0,1)))
else:
images.append(examples["image"][example_idx].permute(2,0,1))
labels.append(examples["label"][example_idx])
output = {"label":labels, "image":images}
return output
raw_dataset = load_dataset('cifar10')
raw_dataset.set_format('torch',columns=['img','label'])
raw_dataset = raw_dataset.rename_column('img','image')
features = datasets.Features({
"image": datasets.Array3D(shape=(3,32,32),dtype="float32"),
"label": datasets.features.ClassLabel(names=[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]),
})
train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
```
The error:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-54-bf29672c53ee> in <module>()
14 ]),
15 })
---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
2 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1287 test_inputs = self[:2] if batched else self[0]
1288 test_indices = [0, 1] if batched else 0
-> 1289 update_data = does_function_return_dict(test_inputs, test_indices)
1290 logger.info("Testing finished, running the mapping function on the dataset")
1291
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1259 processed_inputs = (
-> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1261 )
1262 does_return_dict = isinstance(processed_inputs, Mapping)
<ipython-input-52-b4dccbafb70d> in prepare_features(examples)
3 labels = []
4 print(examples)
----> 5 for example_idx, example in enumerate(examples["image"]):
6 if transform is not None:
7 images.append(transform(examples["image"][example_idx].permute(2,0,1)))
KeyError: 'image'
```
The print statement inside returns this:
```python
{'label': tensor([6, 9])}
```
Apparently, both `img` and `image` do not exist after renaming.
Note that this code works fine with `img` everywhere.
Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
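A minimal sketch of the workaround implied by the first comment: rename the column before calling `set_format`, so the torch-formatted columns match the new name (this ordering is an assumption based on the explanation about `_format_columns`, not a documented fix):
```python
from datasets import load_dataset

raw_dataset = load_dataset("cifar10")
# Rename first, then set the format, so _format_columns refers to "image"
raw_dataset = raw_dataset.rename_column("img", "image")
raw_dataset.set_format("torch", columns=["image", "label"])
```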
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2026/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 19:44:23
|
https://api.github.com/repos/huggingface/datasets/issues/2022
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2022/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2022/events
|
https://github.com/huggingface/datasets/issues/2022
| 827,435,033
|
MDU6SXNzdWU4Mjc0MzUwMzM=
| 2,022
|
ValueError when rename_column on splitted dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simonschoe",
"id": 53626067,
"login": "simonschoe",
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simonschoe",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use the named splits API (for now):\r\n```python\r\ntrain_ds, test_ds = load_dataset(\r\n path='csv', \r\n delimiter='\\t', \r\n data_files=text_files, \r\n split=['train[:90%]', 'train[-10%:]'],\r\n)\r\n\r\ntrain_ds = train_ds.rename_column('sentence', 'text')\r\n```",
"This has been fixed in #2043 , thanks @mariosasko \r\nThe fix is available on master and we'll do a new release soon :)\r\n\r\nfeel free to re-open if you still have issues"
] | 2021-03-10T09:40:38
| 2025-02-05T13:36:07
| 2021-03-16T14:05:05
|
NONE
| null | null | null | null |
Hi there,
I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_dataset(
    path='csv',             # use the 'csv' loading script to load from local tsv files
    delimiter='\t',         # tab-separated values
    data_files=text_files,  # list of paths to local text files
    split=split,            # dict of ReadInstruction objects defined above
)
dataset
```
Part of output:
```python
DatasetDict({
train: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 900
})
test: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 100
})
})
```
Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modeling pipeline. However, if I run the following code I experience a `ValueError`:
```python
dataset['train'].rename_column('sentence', 'text')
```
```python
/usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name)
353 for split_name in split_names_from_instruction:
354 if not re.match(_split_re, split_name):
--> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.")
356
357 def __str__(self):
ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('.
```
In particular, this behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? I assume it's something in the way I defined the split.
Thanks in advance! :)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2022/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 6 days, 4:24:27
|