The 35 columns below appear in each record in this order, separated by `|`:

| Column | Dtype | Values |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 48-51 |
| id | int64 | 600M to 3.67B |
| node_id | string | lengths 18-24 |
| number | int64 | 2 to 7.88k |
| title | string | lengths 1-290 |
| user | dict | |
| labels | list | lengths 0-4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0-4 |
| comments | list | lengths 0-30 |
| created_at | timestamp[s] | 2020-04-14 18:18:51 to 2025-11-26 16:16:56 |
| updated_at | timestamp[s] | 2020-04-29 09:23:05 to 2025-11-30 03:52:07 |
| closed_at | timestamp[s] | 2020-04-29 09:23:05 to 2025-11-21 12:31:19, nullable |
| author_association | string | 4 classes |
| type | null | |
| active_lock_reason | null | |
| draft | null | |
| pull_request | null | |
| body | string | lengths 0 to 228k, nullable |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | |
| state_reason | string | 4 classes |
| sub_issues_summary | dict | |
| issue_dependencies_summary | dict | |
| is_pull_request | bool | 1 class |
| closed_at_time_taken | duration[s] | |
https://api.github.com/repos/huggingface/datasets/issues/730
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/730/comments
|
https://api.github.com/repos/huggingface/datasets/issues/730/events
|
https://github.com/huggingface/datasets/issues/730
| 721,073,812
|
MDU6SXNzdWU3MjEwNzM4MTI=
| 730
|
Possible caching bug
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArneBinder",
"id": 3375489,
"login": "ArneBinder",
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArneBinder",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)",
"Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command \r\n`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\nchange the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html\r\n`dataset = datasets.load_dataset('json', data_files=args.dataset)`\r\n\r\nErrors:\r\n`Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264...\r\n`",
"```ds = load_dataset(\"csv\", data_files={'train': 'train.csv', 'test': 'test.csv'})```\r\n\r\nGives the output\r\n```Using custom data configuration default-5c8ae7c208631aca```\r\n\r\nand the code hangs there.",
"> `ds = load_dataset(\"csv\", data_files={'train': 'train.csv', 'test': 'test.csv'})`\r\n> \r\n> Gives the output `Using custom data configuration default-5c8ae7c208631aca`\r\n> \r\n> and the code hangs there.\r\n\r\nHave you solved it? I met this problem too!",
"Can you Ctrl+C to kill the process and share the stacktrace here ? It should show at which location in the code it was hanging",
"I had the same issue and solved it by downgrading the datasets version from 2.7.0 -> 2.6.1\r\npip install -q datasets==2.6.1",
"> I had the same issue and solved it by downgrading the datasets version from 2.7.0 -> 2.6.1 pip install -q datasets==2.6.1\r\n\r\nThanks, it works for me"
] | 2020-10-14T02:02:34
| 2022-11-22T01:45:54
| 2020-10-29T09:36:01
|
NONE
| null | null | null | null |
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produces this output:
```
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
```
Just changing the order (and deleting the temp files):
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
```
produces this:
```
Using custom data configuration default
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': '🤗🤗🤗'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': '🤗🤗🤗'}
```
Is it intended that the cache path does not depend on the config entries?
tested with datasets==1.1.2 and python==3.8.5
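A possible workaround until the builder config takes `config_kwargs` into account (a sketch only, not the upstream fix; `cache_dir` is a standard `load_dataset` argument): give each encoding its own cache directory so the second call cannot silently reuse the first call's Arrow files.
```python
# Workaround sketch (hypothetical): one cache directory per encoding, so a
# change in config_kwargs is never served from a stale cache.
import datasets

for enc in ("latin_1", "utf-8"):
    dataset = datasets.load_dataset(
        'text',
        data_files=['test1.txt'],
        split="train",
        encoding=enc,
        cache_dir=f"cache_{enc}",  # hypothetical per-config cache location
    )
    print(dataset[0])
```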
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/730/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 15 days, 7:33:27
|
https://api.github.com/repos/huggingface/datasets/issues/729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/729/events
|
https://github.com/huggingface/datasets/issues/729
| 719,558,876
|
MDU6SXNzdWU3MTk1NTg4NzY=
| 729
|
Better error message when one forgets to call `add_batch` before `compute`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-10-12T17:59:22
| 2020-10-29T15:18:24
| 2020-10-29T15:18:24
|
CONTRIBUTOR
| null | null | null | null |
When using metrics, if for some reason a user forgets to call `add_batch` on a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
pass # User forgets to call `add_batch`
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-267729d187fa> in <module>
3 pass
4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 5 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
343 elif self.process_id == 0:
344 # Let's acquire a lock on each node files to be sure they are finished writing
--> 345 file_paths, filelocks = self._get_all_cache_files()
346
347 # Read the predictions and references
~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self)
280 filelocks = []
281 for process_id, file_path in enumerate(file_paths):
--> 282 filelock = FileLock(file_path + ".lock")
283 try:
284 filelock.acquire(timeout=self.timeout)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
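One way to make the failure clearer (a hypothetical guard, not the fix that was actually applied): check whether any data was recorded before running the finalize/cache machinery, reusing the `GatherMetric` from the reproducer and the `cache_file_name` attribute that appears in the traceback.
```python
# Hypothetical early check: cache_file_name is only set once data has been
# written via add()/add_batch(), as suggested by the traceback above.
class SafeGatherMetric(GatherMetric):
    def compute(self, *args, **kwargs):
        if getattr(self, "cache_file_name", None) is None and not kwargs:
            raise ValueError(
                "No predictions/references were added. Call add() or add_batch() "
                "before compute(), or pass predictions and references to compute()."
            )
        return super().compute(*args, **kwargs)

metric = SafeGatherMetric(cache_dir="test-metric")
metric.compute()  # raises a descriptive ValueError instead of the TypeError above
```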
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/729/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 16 days, 21:19:02
|
https://api.github.com/repos/huggingface/datasets/issues/728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/728/events
|
https://github.com/huggingface/datasets/issues/728
| 719,555,780
|
MDU6SXNzdWU3MTk1NTU3ODA=
| 728
|
Passing `cache_dir` to a metric does not work
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-10-12T17:55:14
| 2020-10-29T09:34:42
| 2020-10-29T09:34:42
|
CONTRIBUTOR
| null | null | null | null |
When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
The code works when we remove the `cache_dir=...` from the metric.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/728/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 16 days, 15:39:28
|
https://api.github.com/repos/huggingface/datasets/issues/727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/727/events
|
https://github.com/huggingface/datasets/issues/727
| 719,386,366
|
MDU6SXNzdWU3MTkzODYzNjY=
| 727
|
Parallel downloads progress bar flickers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[] | 2020-10-12T13:36:05
| 2020-10-12T13:36:05
| null |
MEMBER
| null | null | null | null |
When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that we could simply specify `position=i`, for i = 0 to n (the number of files to download), when instantiating each tqdm progress bar.
Another way would be to have one "master" progress bar that tracks the number of finished downloads, plus one progress bar per process that shows its current download.
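A minimal sketch of the first option (illustrative only, not the download manager's actual code): pinning each bar to its own terminal row with tqdm's `position` argument keeps concurrent bars from overwriting one another.
```python
# Illustrative only: each worker gets a fixed row via position=i, so the
# parallel progress bars stop fighting over the same terminal line.
import time
from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm

def fake_download(i, n_chunks=50):
    for _ in tqdm(range(n_chunks), desc=f"file {i}", position=i, leave=True):
        time.sleep(0.01)  # stand-in for fetching one chunk

with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(fake_download, range(4)))
```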
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/727/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/726
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/726/comments
|
https://api.github.com/repos/huggingface/datasets/issues/726/events
|
https://github.com/huggingface/datasets/issues/726
| 719,313,754
|
MDU6SXNzdWU3MTkzMTM3NTQ=
| 726
|
"Checksums didn't match for dataset source files" error while loading openwebtext dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4",
"events_url": "https://api.github.com/users/SparkJiao/events{/privacy}",
"followers_url": "https://api.github.com/users/SparkJiao/followers",
"following_url": "https://api.github.com/users/SparkJiao/following{/other_user}",
"gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SparkJiao",
"id": 16469472,
"login": "SparkJiao",
"node_id": "MDQ6VXNlcjE2NDY5NDcy",
"organizations_url": "https://api.github.com/users/SparkJiao/orgs",
"received_events_url": "https://api.github.com/users/SparkJiao/received_events",
"repos_url": "https://api.github.com/users/SparkJiao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SparkJiao",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi try, to provide more information please.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).",
"> Hi try, to provide more information please.\r\n> \r\n> Example code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).\r\n\r\nI have update the description, sorry for the incomplete issue by mistake.",
"Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz' and use the following command to preprocess the examples:\r\n```\r\n>>> dataset = load_dataset('/home/admin/workspace/datasets/datasets-master/datasets-master/datasets/openwebtext', data_dir='/home/admin/workspace/datasets')\r\nUsing custom data configuration default\r\nDownloading and preparing dataset openwebtext/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...\r\nDataset openwebtext downloaded and prepared to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02. Subsequent calls will reuse this data.\r\n>>> len(dataset['train'])\r\n74571\r\n>>>\r\n```\r\nThe size of the pre-processed example file is only 354MB, however the processed bookcorpus dataset is 4.6g. Are there any problems?",
"NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n\r\ni got this issue when i try to work on my own datasets kindly tell me, from where i can get checksums of train and dev file in my github repo",
"Hi, I got the similar issue for xnli dataset while working on colab with python3.7. \r\n\r\n`nlp.load_dataset(path = 'xnli')`\r\n\r\nThe above command resulted in following issue : \r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']\r\n```\r\n\r\nAny idea how to fix this ?",
"Did anyone figure out how to fix this error?",
"Fixed by:\r\n- #2857",
"Says fixed but I'm still getting it. \r\n\r\ncommand:\r\n\r\n dataset = load_dataset(\"ted_talks_iwslt\", language_pair=(\"en\", \"es\"), year=\"2014\",download_mode=\"force_redownload\")\r\n\r\ngot:\r\n\r\nUsing custom data configuration en_es_2014-35a2d3350a0f9823\r\nDownloading and preparing dataset ted_talks_iwslt/en_es_2014 (download: 2.15 KiB, generated: Unknown size, post-processed: Unknown size, total: 2.15 KiB) to /home/ken/.cache/huggingface/datasets/ted_talks_iwslt/en_es_2014-35a2d3350a0f9823/1.1.0/43935b3fe470c753a023642e1f54b068c590847f9928bd3f2ec99f15702ad6a6...\r\nDownloading:\r\n2.21k/? [00:00<00:00, 141kB/s]\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/u/0/uc?id=1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z&export=download']"
] | 2020-10-12T11:45:10
| 2022-02-17T17:53:54
| 2022-02-15T10:38:57
|
NONE
| null | null | null | null |
Hi,
I encountered this problem while loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://zenodo.org/record/3834942/files/openwebtext.tar.xz']
```
I think this problem is caused by a change in the released dataset. Or should I download the dataset manually?
Sorry for releasing the unfinished issue by mistake.
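A workaround sketch based on options quoted in the replies above and visible in the traceback (`ignore_verifications` appears in the `load_dataset` call, and one commenter uses `download_mode="force_redownload"`); neither fixes the underlying change in the source file, they only skip or refresh the checksum check.
```python
# Workaround sketch only: skip checksum verification, or force a fresh download.
from datasets import load_dataset

dataset = load_dataset("openwebtext", ignore_verifications=True)
# or:
# dataset = load_dataset("openwebtext", download_mode="force_redownload")
```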
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/726/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 490 days, 22:53:47
|
https://api.github.com/repos/huggingface/datasets/issues/724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/724/events
|
https://github.com/huggingface/datasets/issues/724
| 718,947,700
|
MDU6SXNzdWU3MTg5NDc3MDA=
| 724
|
need to redirect /nlp to /datasets and remove outdated info
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Should be fixed now: \r\n\r\n\r\n\r\nNot sure I understand what you mean by the second part?\r\n",
"Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* https://huggingface.co/datasets/wikihow\r\n* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all\r\nCan you see the difference? 2nd has formatting, 1st doesn't.\r\n",
"For context, those are two different pages (not an old vs new one), one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displayed some metadata about the dataset.\r\n\r\nFor the second one, we'll move to markdown parsing soon, so it'll be formatted better.",
"I understand. I was just flagging the lack of markup issue."
] | 2020-10-11T23:12:12
| 2020-10-14T17:00:12
| 2020-10-14T17:00:12
|
CONTRIBUTOR
| null | null | null | null |
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
Also, for some reason the new information is slightly borked. If you look at the old one, it was nicely formatted and had the links marked up; the new one is just a jumble of text in one chunk with no markup for links (i.e. not clickable).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/724/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 17:48:00
|
https://api.github.com/repos/huggingface/datasets/issues/723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/723/events
|
https://github.com/huggingface/datasets/issues/723
| 718,926,723
|
MDU6SXNzdWU3MTg5MjY3MjM=
| 723
|
Adding pseudo-labels to datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer",
"user_view_type": "public"
}
] |
[
"Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n",
"They can be used as training data for a smaller model.",
"Sounds just like a regular dataset to me then, no?",
"A new configuration for those datasets should do the job then.\r\nNote that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the default/standard configuration name (not the one with pseudo labels).",
"Could also be a `user-namespace` dataset maybe?",
"Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community",
"\r\n\r\nI assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3?",
"You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.\r\n\r\n```\r\ndatasets-cli upload_dataset path/to/xsum\r\n```"
] | 2020-10-11T21:05:45
| 2021-08-03T05:11:51
| 2021-08-03T05:11:51
|
CONTRIBUTOR
| null | null | null | null |
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution?
I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution.
I could, for example, make a new directory, `xsum_bart_pseudolabels` for each set of pseudolabels or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py
What do you think @lhoestq ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/723/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/723/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 295 days, 8:06:06
|
https://api.github.com/repos/huggingface/datasets/issues/721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/721/events
|
https://github.com/huggingface/datasets/issues/721
| 718,647,147
|
MDU6SXNzdWU3MTg2NDcxNDc=
| 721
|
feat(dl_manager): add support for ftp downloads
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the dataset.\r\n\r\nTo make the download_manager work with a custom downloader, you can call `download_manager.download_custom` instead of `download_manager.download_and_extract`. The expected arguments are the following:\r\n```\r\nurl_or_urls: url or `list`/`dict` of urls to download and extract. Each\r\n url is a `str`.\r\ncustom_download: Callable with signature (src_url: str, dst_path: str) -> Any\r\n as for example `tf.io.gfile.copy`, that lets you download from google storage\r\n```\r\n",
"Also maybe it coud be interesting to have a direct support of ftp inside the `datasets` library. Do you know any good libraries that we might consider adding as a (optional ?) dependency ?",
"Downloading an `ftp` file is as simple as:\r\n```python\r\nimport urllib \r\nurllib.urlretrieve('ftp://server/path/to/file', 'file')\r\n```\r\n\r\nI believe this should be supported by the library, as its not using any dependency and is trivial amount of code.",
"I know its unorthodox, but I added `ftp` download support to `file_utils` in the same PR https://github.com/huggingface/datasets/pull/722\r\nSo its possible to understand the interaction of the download component with the ftp download ability",
"Awesome ! I'll take a look :)",
"@AmitMY Can you now download the Phoenix2014 Dataset?",
"@hoanganhpham1006 yes.\r\nSee pull request https://github.com/huggingface/datasets/pull/722 , it has a loader for this dataset, mostly ready.\r\nThere's one issue that delays it being merged - https://github.com/huggingface/datasets/issues/741 - regarding memory consumption.",
"The problem which I have now is that this dataset seems does not allow to download? Can you share it with me pls",
"The dataset loader is not yet ready, because of that issue.\r\nIf you want to just download the dataset the old-fashioned way, just go to: https://www-i6.informatik.rwth-aachen.de/ftp/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and its available over https)",
"Got it, thank you so much!",
"FTP downloads are supported."
] | 2020-10-10T15:50:20
| 2022-02-15T10:44:44
| 2022-02-15T10:44:43
|
CONTRIBUTOR
| null | null | null | null |
I am working on a new dataset (#302) and encountered a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```
I get an error:
> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path
I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188
Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
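A sketch of the interim approach described in the maintainer reply above (assuming the usual dataset-script context, and reusing `_URL` and `dl_manager` from the snippet): pass `download_custom` a small FTP-capable callable instead of relying on the default HTTP downloader.
```python
# Sketch under the download_custom signature quoted above (src_url, dst_path);
# urllib.request.urlretrieve handles ftp:// URLs without extra dependencies.
import urllib.request

def ftp_fetch(src_url: str, dst_path: str) -> None:
    urllib.request.urlretrieve(src_url, dst_path)

archive_path = dl_manager.download_custom(_URL, ftp_fetch)  # download only; extraction is a separate step
```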
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/721/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 492 days, 18:54:23
|
https://api.github.com/repos/huggingface/datasets/issues/720
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/720/comments
|
https://api.github.com/repos/huggingface/datasets/issues/720/events
|
https://github.com/huggingface/datasets/issues/720
| 716,581,266
|
MDU6SXNzdWU3MTY1ODEyNjY=
| 720
|
OSError: Cannot find data file when not using the dummy dataset in RAG
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4",
"events_url": "https://api.github.com/users/josemlopez/events{/privacy}",
"followers_url": "https://api.github.com/users/josemlopez/followers",
"following_url": "https://api.github.com/users/josemlopez/following{/other_user}",
"gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/josemlopez",
"id": 4112135,
"login": "josemlopez",
"node_id": "MDQ6VXNlcjQxMTIxMzU=",
"organizations_url": "https://api.github.com/users/josemlopez/orgs",
"received_events_url": "https://api.github.com/users/josemlopez/received_events",
"repos_url": "https://api.github.com/users/josemlopez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/josemlopez",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 446 try:\r\n--> 447 return pickle.load(fid, **pickle_kwargs)\r\n 448 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 559 \r\n--> 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n\r\n~/src/datasets/src/datasets/builder.py in _prepare_split(self, split_generator)\r\n 847 writer.write(example)\r\n--> 848 finally:\r\n 849 num_examples, num_bytes = writer.finalize()\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 227 try:\r\n--> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 229 # return super(tqdm...) will not catch exception\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)\r\n 1132 try:\r\n-> 1133 for obj in iterable:\r\n 1134 yield obj\r\n\r\n/hdd/rag/cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 449 raise IOError(\r\n--> 450 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 451 \r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='/hdd/rag/downloads/99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-8-24351ff8ce44> in <module>\r\n 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", \r\n 5 index_name=\"exact\",\r\n----> 6 use_dummy_dataset=False)\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 321 generator_tokenizer = rag_tokenizer.generator\r\n 322 return cls(\r\n--> 323 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 324 )\r\n 325 \r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 310 self.config = config\r\n 311 if self._init_retrieval:\r\n--> 312 self.init_retrieval()\r\n 313 \r\n 314 @classmethod\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_retrieval(self)\r\n 338 \r\n 339 logger.info(\"initializing retrieval\")\r\n--> 340 self.index.init_index()\r\n 341 \r\n 342 def postprocess_docs(self, docs, input_strings, prefix, n_docs, 
return_tensors=None):\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_index(self)\r\n 248 split=self.dataset_split,\r\n 249 index_name=self.index_name,\r\n--> 250 dummy=self.use_dummy_dataset,\r\n 251 )\r\n 252 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n~/src/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 615 builder_instance.download_and_prepare(\r\n 616 download_config=download_config,\r\n--> 617 download_mode=download_mode,\r\n 618 ignore_verifications=ignore_verifications,\r\n 619 )\r\n\r\n~/src/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 481 # Sync info\r\n 482 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n--> 483 self.info.download_checksums = dl_manager.get_recorded_sizes_checksums()\r\n 484 self.info.size_in_bytes = self.info.dataset_size + self.info.download_size\r\n 485 # Save info\r\n\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n--> 562 \r\n 563 # Update the info object with the splits.\r\n 564 self.info.splits = split_dict\r\n\r\nOSError: Cannot find data file.\r\n```\r\n\r\nThank you.",
"An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. ",
"Closing this one. Feel free to re-open if you have other questions about this issue"
] | 2020-10-07T14:27:13
| 2020-12-23T14:04:31
| 2020-12-23T14:04:31
|
NONE
| null | null | null | null |
## Environment info
- transformers version: 3.3.1
- Platform: Linux-4.19
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour:
```
import os
os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache'
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
```
Please note that I'm using the whole dataset: **use_dummy_dataset=False**
After around 4 hours (downloading and some other things), the following is returned:
```
Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
459 try:
--> 460 return pickle.load(fid, **pickle_kwargs)
461 except Exception:
UnpicklingError: pickle data was truncated
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
552 # Prepare split will record examples associated to the split
--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
840 for key, record in utils.tqdm(
--> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
842 ):
/opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
/opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)
131 break
--> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
133 vec_idx = 0
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
462 raise IOError(
--> 463 "Failed to interpret file %s as a pickle" % repr(file))
464 finally:
OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-f28df370ac47> in <module>
1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets
----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)
307 generator_tokenizer = rag_tokenizer.generator
308 return cls(
--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
310 )
311
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)
298 self.config = config
299 if self._init_retrieval:
--> 300 self.init_retrieval()
301
302 @classmethod
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self)
324
325 logger.info("initializing retrieval")
--> 326 self.index.init_index()
327
328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self)
238 split=self.dataset_split,
239 index_name=self.index_name,
--> 240 dummy=self.use_dummy_dataset,
241 )
242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
474 if not downloaded_from_gcs:
475 self._download_and_prepare(
--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
477 )
478 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
--> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
556
557 if verify_infos:
OSError: Cannot find data file.
```
Thanks
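In case it helps anyone hitting the same truncated-pickle state: a rough, untested sketch of a workaround (assuming the cached download was simply interrupted) is to ask `load_dataset` to re-fetch the files instead of reusing the cache; `download_mode` is part of the `load_dataset` signature visible in the traceback above.
```python
from datasets import load_dataset

# Sketch only: force a fresh download of the wiki_dpr config from the log above
# instead of reusing a possibly truncated file in the cache.
# Note: older releases may expect datasets.GenerateMode.FORCE_REDOWNLOAD
# instead of the plain string.
ds = load_dataset(
    "wiki_dpr",
    "psgs_w100.nq.exact",
    download_mode="force_redownload",
)
```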
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/720/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 76 days, 23:37:18
|
https://api.github.com/repos/huggingface/datasets/issues/712
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/712/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/712/comments
|
https://api.github.com/repos/huggingface/datasets/issues/712/events
|
https://github.com/huggingface/datasets/issues/712
| 714,242,316
|
MDU6SXNzdWU3MTQyNDIzMTY=
| 712
|
Error in the notebooks/Overview.ipynb notebook
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/subhrm",
"id": 850012,
"login": "subhrm",
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"repos_url": "https://api.github.com/users/subhrm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/subhrm",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Do this:\r\n``` python\r\nsquad_dataset = list_datasets(with_details=True)[datasets.index('squad')]\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```",
"Thanks! This worked. I have created a PR to fix this in the notebook. "
] | 2020-10-04T05:58:31
| 2020-10-05T16:25:40
| 2020-10-05T16:25:40
|
CONTRIBUTOR
| null | null | null | null |
Hi,
I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in Google Colab. I used the [link](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in Colab.
```python
# You can access various attributes of the datasets before downloading them
squad_dataset = list_datasets()[datasets.index('squad')]
pprint(squad_dataset.__dict__) # It's a simple python dataclass
```
Error message
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-8dc805c4949c> in <module>()
2 squad_dataset = list_datasets()[datasets.index('squad')]
3
----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass
AttributeError: 'str' object has no attribute '__dict__'
```
The object `squad_dataset` is a `str`, not a `dataclass`.
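For completeness, the corrected cell (following the suggestion in the first comment above; a sketch, not the official notebook fix) would look like:
```python
from pprint import pprint
from datasets import list_datasets

# list_datasets() returns plain name strings; with_details=True returns
# dataclass objects, so __dict__ is available.
dataset_names = list_datasets()
squad_dataset = list_datasets(with_details=True)[dataset_names.index('squad')]
pprint(squad_dataset.__dict__)  # It's a simple python dataclass
```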
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/712/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/712/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 10:27:09
|
https://api.github.com/repos/huggingface/datasets/issues/709
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/709/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/709/comments
|
https://api.github.com/repos/huggingface/datasets/issues/709/events
|
https://github.com/huggingface/datasets/issues/709
| 714,067,902
|
MDU6SXNzdWU3MTQwNjc5MDI=
| 709
|
How to use similarity settings other than "BM25" in an Elasticsearch index?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsankar",
"id": 431890,
"login": "nsankar",
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"repos_url": "https://api.github.com/users/nsankar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsankar",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Datasets does not use elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration passed to datasets\r\n\r\n```\r\ncurl -X PUT \"localhost:9200/index?pretty\" -H 'Content-Type: application/json' -d'\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"similarity\": {\r\n \"my_similarity\": {\r\n \"type\": \"DFR\",\r\n \"basic_model\": \"g\",\r\n \"after_effect\": \"l\",\r\n \"normalization\": \"h2\",\r\n \"normalization.h2.c\": \"3.0\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n'\r\n\r\n```"
] | 2020-10-03T11:18:49
| 2022-10-04T17:19:37
| 2022-10-04T17:19:37
|
NONE
| null | null | null | null |
**QUESTION: How should we use similarity algorithms supported by Elasticsearch other than "BM25"?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**Context:**
========
I used the latest Elasticsearch server, version 7.9.2.
When I set DFR, which is one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error.
For example, here is the DFR setting I tried first in the mappings:
`"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},`
I get the following error
RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]')
As another option, I tried declaring a "my_similarity" similarity within the settings and then assigning "my_similarity" inside the mappings, as below:
`es_config = {
"settings": {
"number_of_shards": 1,
**"similarity": "my_similarity"**: {
"type": "DFR",
"basic_model": "g",
"after_effect": "l",
"normalization": "h2",
"normalization.h2.c": "3.0"
} ,
"analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
},
"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}},
}`
For this, I got the following error:
RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
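For reference, the shape Elasticsearch expects (a sketch based on the similarity-module docs linked above and the curl example in the comments; `my_similarity` is just a placeholder name) nests the custom similarity under `settings.index.similarity` and then references it from the mapping:
```python
# Sketch of a config in the shape Elasticsearch expects: the custom similarity
# lives under settings -> index -> similarity, and the mapping refers to it by name.
es_config = {
    "settings": {
        "number_of_shards": 1,
        "index": {
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        },
        "analysis": {
            "analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}
        },
    },
    "mappings": {
        "properties": {
            "text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}
        }
    },
}
```
Whether this resolves the issue also depends on the Elasticsearch version and on how the config is passed to `datasets` (e.g. via `es_index_config`), so treat it as a starting point rather than a verified fix.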
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/709/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/709/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 731 days, 6:00:48
|
https://api.github.com/repos/huggingface/datasets/issues/708
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/708/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/708/comments
|
https://api.github.com/repos/huggingface/datasets/issues/708/events
|
https://github.com/huggingface/datasets/issues/708
| 714,020,953
|
MDU6SXNzdWU3MTQwMjA5NTM=
| 708
|
Datasets performance slow? - 6.4x slower than in memory dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38154?v=4",
"events_url": "https://api.github.com/users/eugeneware/events{/privacy}",
"followers_url": "https://api.github.com/users/eugeneware/followers",
"following_url": "https://api.github.com/users/eugeneware/following{/other_user}",
"gists_url": "https://api.github.com/users/eugeneware/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eugeneware",
"id": 38154,
"login": "eugeneware",
"node_id": "MDQ6VXNlcjM4MTU0",
"organizations_url": "https://api.github.com/users/eugeneware/orgs",
"received_events_url": "https://api.github.com/users/eugeneware/received_events",
"repos_url": "https://api.github.com/users/eugeneware/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eugeneware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eugeneware/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eugeneware",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.",
"And if you use in-memory-data with datasets with `load_dataset(..., keep_in_memory=True)`?",
"Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that.",
"We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ?",
"By default the datasets loaded with `load_dataset` live on disk.\r\nIt's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`.\r\n\r\nSmall correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice to add it indeed :)",
"Yes indeed we should add it!",
"Great! Thanks a lot.\r\n\r\nI did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data.\r\n\r\n```python\r\nfeatures = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names)\r\nfeatures.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nfeatures_in_memory = dataset.map(tokenize, batched=True, keep_in_memory=True, remove_columns=dataset['train'].column_names)\r\nfeatures_in_memory.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nin_memory = [features['train'][i] for i in range(len(features['train']))]\r\n```\r\n\r\nFor using the features without any tweak, I got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nFor using the features mapped with `keep_in_memory=True`, I also got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features_in_memory['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nAnd for the case using every element in memory, converted from the original dataset, I got **12.5s**:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(in_memory, batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nTaking a closer look in my SQuAD code, using a profiler, I see a lot of calls to `posix read` api. It seems that it is really reliying on disk, which results in a very high train time.",
"I am having the same issue here. When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%.\r\n\r\nIn disk:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=2500)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_bert_big\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=32,\r\n per_device_eval_batch_size=16,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n eval_steps=100,\r\n no_cuda=False,\r\n gradient_accumulation_steps=16,\r\n fp16=True)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n```\r\n\r\nIn disk I can only get 0,17 it/s:\r\n`[ 13/28907 01:03 < 46:03:27, 0.17 it/s, Epoch 0.00/1] `\r\n\r\nIf I load it with torch.utils.data.Dataset()\r\n```\r\nclass BCorpusDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings):\r\n self.encodings = encodings\r\n\r\n def __getitem__(self, idx):\r\n item = [torch.tensor(val[idx]) for key, val in self.encodings.items()][0]\r\n return item\r\n\r\n def __len__(self):\r\n length = [len(val) for key, val in self.encodings.items()][0]\r\n return length\r\n\r\n**book_corpus = book_corpus.select([i for i in range(16*2000)])** # filtering to not have 20% of BC in memory...\r\nbook_corpus = book_corpus(book_corpus)\r\n```\r\nI can get:\r\n` [ 5/62 00:09 < 03:03, 0.31 it/s, Epoch 0.06/1]`\r\n\r\nBut obviously I can not get BookCorpus in memory xD\r\n\r\nEDIT: it is something weird. If i load in disk 1% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')\r\n```\r\n\r\nI can get 0.28 it/s, (the same that in memory) but if I load 20% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\n```\r\nI get again 0.17 it/s. \r\n\r\nI am missing something? I think it is something related to size, and not disk or in-memory.",
"There is a way to increase the batches read from memory? or multiprocessed it? I think that one of two or it is reading with just 1 core o it is reading very small chunks from disk and left my GPU at 0 between batches",
"My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks."
] | 2020-10-03T06:44:07
| 2021-02-12T14:13:28
| 2021-02-12T14:13:28
|
NONE
| null | null | null | null |
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected, I guess, due to memory-mapping data using Arrow files, and you don't get anything for free. But I was surprised at how much slower it was.
For example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 just to process the data and get it onto the GPU (no model involved), whereas the equivalent in-memory dataset would finish in just 0:33.
Is this expected? Given that one of the goals of this project is also to accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but I thought I'd open this issue to discuss it.
For reference, I'm running an AMD Ryzen Threadripper 1900X 8-Core Processor CPU with 128 GB of RAM and an NVMe SSD (Samsung 960 EVO). I'm running with an RTX Titan 24GB GPU.
I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower.
What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance?
At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of the forward and backward passes in practice, and thus not worth worrying about?
In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test.
``` py
import sys
from datasets import load_dataset
from transformers import DataCollatorWithPadding, BertTokenizerFast
from torch.utils.data import DataLoader
from tqdm import tqdm
if __name__ == '__main__':
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
collate_fn = DataCollatorWithPadding(tokenizer, padding=True)
ds = load_dataset('yelp_polarity')
def do_tokenize(x):
return tokenizer(x['text'], truncation=True)
ds = ds.map(do_tokenize, batched=True)
ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask'])
if len(sys.argv) == 2 and sys.argv[1] == 'memory':
# copy to memory - probably a faster way to do this - but demonstrates the point
# approximately 530 batches per second - 17500 batches in 0:33
print('using memory')
_ds = [data for data in tqdm(ds['train'])]
else:
# approximately 83 batches per second - 17500 batches in 3:31
print('using datasets')
_ds = ds['train']
dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4)
for data in tqdm(dl):
for k, v in data.items():
data[k] = v.to('cuda')
```
For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d)
Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints.
Thanks for all your great work.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/708/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/708/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 132 days, 7:29:21
|
https://api.github.com/repos/huggingface/datasets/issues/707
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/707/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/707/comments
|
https://api.github.com/repos/huggingface/datasets/issues/707/events
|
https://github.com/huggingface/datasets/issues/707
| 713,954,666
|
MDU6SXNzdWU3MTM5NTQ2NjY=
| 707
|
Requirements should specify pyarrow<1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/918541?v=4",
"events_url": "https://api.github.com/users/mathcass/events{/privacy}",
"followers_url": "https://api.github.com/users/mathcass/followers",
"following_url": "https://api.github.com/users/mathcass/following{/other_user}",
"gists_url": "https://api.github.com/users/mathcass/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mathcass",
"id": 918541,
"login": "mathcass",
"node_id": "MDQ6VXNlcjkxODU0MQ==",
"organizations_url": "https://api.github.com/users/mathcass/orgs",
"received_events_url": "https://api.github.com/users/mathcass/received_events",
"repos_url": "https://api.github.com/users/mathcass/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mathcass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathcass/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mathcass",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hello @mathcass I would want to work on this issue. May I do the same? ",
"@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.",
"Hello @mathcass \r\n1. I did fork the repository and clone the same on my local system. \r\n\r\n2. Then learnt about how we can publish our package on pypi.org. Also, found some instructions on same in setup.py documentation.\r\n\r\n3. Then I Perplexity document link that you shared above. I created a colab link from there keep both tensorflow and pytorch means a mixed option and tried to run it in colab but I encountered no errors at a point where you mentioned. Can you help me to figure out the issue. \r\n\r\n4.Here is the link of the colab file with my saved responses. \r\nhttps://colab.research.google.com/drive/1hfYz8Ira39FnREbxgwa_goZWpOojp2NH?usp=sharing",
"Also, please share some links which made you conclude that pyarrow < 1 would help. ",
"Access granted for the colab link. ",
"Thanks for looking at this @punitaojha and thanks for sharing the notebook. \r\n\r\nI just tried to reproduce this on my own (based on the environment where I had this issue) and I can't reproduce it somehow. If I run into this again, I'll include some steps to reproduce it. I'll close this as invalid. \r\n\r\nThanks again. ",
"I am sorry for hijacking this closed issue, but I believe I was able to reproduce this very issue. Strangely enough, it also turned out that running `pip install \"pyarrow<1\" --upgrade` did indeed fix the issue (PyArrow was installed in version `0.14.1` in my case).\r\n\r\nPlease see the Colab below:\r\n\r\nhttps://colab.research.google.com/drive/15QQS3xWjlKW2aK0J74eEcRFuhXUddUST\r\n\r\nThanks!"
] | 2020-10-02T23:39:39
| 2020-12-04T08:22:39
| 2020-10-04T20:50:28
|
NONE
| null | null | null | null |
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load `datasets` and try to load WikiText, you get the error:
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to `datasets` having installed PyArrow 1.0.1, but there's no pinning in the setup file.
https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68
Downgrading by running `pip install "pyarrow<1"` resolved the issue.
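The change the title asks for would be a one-line pin in `setup.py` (a sketch only; the list name and the lower bound here are assumptions, not taken from the repo):
```python
# Sketch of the proposed pin in setup.py, inside the required-packages list:
REQUIRED_PKGS = [
    # ...
    "pyarrow>=0.16.0,<1.0.0",  # assumed lower bound; keep pyarrow below 1.x for now
    # ...
]
```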
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/918541?v=4",
"events_url": "https://api.github.com/users/mathcass/events{/privacy}",
"followers_url": "https://api.github.com/users/mathcass/followers",
"following_url": "https://api.github.com/users/mathcass/following{/other_user}",
"gists_url": "https://api.github.com/users/mathcass/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mathcass",
"id": 918541,
"login": "mathcass",
"node_id": "MDQ6VXNlcjkxODU0MQ==",
"organizations_url": "https://api.github.com/users/mathcass/orgs",
"received_events_url": "https://api.github.com/users/mathcass/received_events",
"repos_url": "https://api.github.com/users/mathcass/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mathcass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathcass/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mathcass",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/707/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/707/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 21:10:49
|
https://api.github.com/repos/huggingface/datasets/issues/705
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/705/comments
|
https://api.github.com/repos/huggingface/datasets/issues/705/events
|
https://github.com/huggingface/datasets/issues/705
| 713,709,100
|
MDU6SXNzdWU3MTM3MDkxMDA=
| 705
|
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4",
"events_url": "https://api.github.com/users/pvcastro/events{/privacy}",
"followers_url": "https://api.github.com/users/pvcastro/followers",
"following_url": "https://api.github.com/users/pvcastro/following{/other_user}",
"gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pvcastro",
"id": 12713359,
"login": "pvcastro",
"node_id": "MDQ6VXNlcjEyNzEzMzU5",
"organizations_url": "https://api.github.com/users/pvcastro/orgs",
"received_events_url": "https://api.github.com/users/pvcastro/received_events",
"repos_url": "https://api.github.com/users/pvcastro/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pvcastro",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR",
"Thanks @lhoestq !"
] | 2020-10-02T15:27:55
| 2020-10-05T08:14:59
| 2020-10-05T08:14:59
|
NONE
| null | null | null | null |
## Environment info
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, is in CSV format, and contains just a text column and a label column, with a comma as the separator. Here's a sample:
```
text,label
"Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION
```
However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section.
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using `conda create -n transformers python=3.7`
2. Cloned transformers master, `cd`'d into it, and installed it using `pip install --editable . -r examples/requirements.txt`
3. Installed tensorflow with `pip install tensorflow`
4. Ran `run_tf_text_classification.py` with the following parameters:
```
--train_file <DATASET_PATH>/train.csv \
--dev_file <DATASET_PATH>/dev.csv \
--test_file <DATASET_PATH>/test.csv \
--label_column_id 1 \
--model_name_or_path neuralmind/bert-base-portuguese-cased \
--output_dir <OUTPUT_PATH> \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 1000 \
--evaluate_during_training \
--save_steps 1000 \
--overwrite_output_dir \
--overwrite_cache
```
I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference.
Here is the stack trace:
```
2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz
2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False
10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
Using custom data configuration default
Traceback (most recent call last):
File "run_tf_text_classification.py", line 283, in <module>
main()
File "run_tf_text_classification.py", line 222, in main
max_seq_length=data_args.max_seq_length,
File "run_tf_text_classification.py", line 43, in get_tfds
ds = datasets.load_dataset("csv", data_files=files)
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config
for key in sorted(data_files.keys()):
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```
## Expected behavior
Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
I originally opened this issue in the transformers repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). @jplu instructed me to open it here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem comes from datasets.
Thanks!
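Until the fix lands in `datasets`, a possible workaround (a sketch, assuming the failure really comes from `sorted()` being called on `NamedSplit` keys, as the traceback suggests) is to pass plain string split names in `data_files`:
```python
import datasets

# Sketch of a workaround: plain string keys sort fine, unlike datasets.Split objects.
# The file names below stand in for the --train_file / --dev_file / --test_file arguments.
ds = datasets.load_dataset(
    "csv",
    data_files={
        "train": "train.csv",
        "validation": "dev.csv",
        "test": "test.csv",
    },
)
```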
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/705/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 16:47:04
|
https://api.github.com/repos/huggingface/datasets/issues/699
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/699/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/699/comments
|
https://api.github.com/repos/huggingface/datasets/issues/699/events
|
https://github.com/huggingface/datasets/issues/699
| 713,395,642
|
MDU6SXNzdWU3MTMzOTU2NDI=
| 699
|
XNLI dataset is not loading
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4",
"events_url": "https://api.github.com/users/imadarsh1001/events{/privacy}",
"followers_url": "https://api.github.com/users/imadarsh1001/followers",
"following_url": "https://api.github.com/users/imadarsh1001/following{/other_user}",
"gists_url": "https://api.github.com/users/imadarsh1001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/imadarsh1001",
"id": 14936525,
"login": "imadarsh1001",
"node_id": "MDQ6VXNlcjE0OTM2NTI1",
"organizations_url": "https://api.github.com/users/imadarsh1001/orgs",
"received_events_url": "https://api.github.com/users/imadarsh1001/received_events",
"repos_url": "https://api.github.com/users/imadarsh1001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/imadarsh1001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imadarsh1001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/imadarsh1001",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"also i tried below code to solve checksum error \r\n`datasets-cli test ./datasets/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 268, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 279, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py\", line 76, in run\r\n module_path, hash = prepare_module(path)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 283, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at ./datasets/xnli/xnli.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n```\r\n\r\n",
"Hi !\r\nYes the download url changed.\r\nIt's updated on the master branch. I'm doing a release today to fix that :)",
"the issue is fixed with latest release \r\n\r\n"
] | 2020-10-02T06:53:16
| 2020-10-03T17:45:52
| 2020-10-03T17:43:37
|
NONE
| null | null | null | null |
`dataset = datasets.load_dataset(path='xnli')`
shows the error below:
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
I think the URL has now changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4",
"events_url": "https://api.github.com/users/imadarsh1001/events{/privacy}",
"followers_url": "https://api.github.com/users/imadarsh1001/followers",
"following_url": "https://api.github.com/users/imadarsh1001/following{/other_user}",
"gists_url": "https://api.github.com/users/imadarsh1001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/imadarsh1001",
"id": 14936525,
"login": "imadarsh1001",
"node_id": "MDQ6VXNlcjE0OTM2NTI1",
"organizations_url": "https://api.github.com/users/imadarsh1001/orgs",
"received_events_url": "https://api.github.com/users/imadarsh1001/received_events",
"repos_url": "https://api.github.com/users/imadarsh1001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/imadarsh1001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imadarsh1001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/imadarsh1001",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/699/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/699/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 10:50:21
|
https://api.github.com/repos/huggingface/datasets/issues/691
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/691/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/691/comments
|
https://api.github.com/repos/huggingface/datasets/issues/691/events
|
https://github.com/huggingface/datasets/issues/691
| 712,389,499
|
MDU6SXNzdWU3MTIzODk0OTk=
| 691
|
Add UI filter to filter datasets based on task
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4",
"events_url": "https://api.github.com/users/praateekmahajan/events{/privacy}",
"followers_url": "https://api.github.com/users/praateekmahajan/followers",
"following_url": "https://api.github.com/users/praateekmahajan/following{/other_user}",
"gists_url": "https://api.github.com/users/praateekmahajan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/praateekmahajan",
"id": 7589415,
"login": "praateekmahajan",
"node_id": "MDQ6VXNlcjc1ODk0MTU=",
"organizations_url": "https://api.github.com/users/praateekmahajan/orgs",
"received_events_url": "https://api.github.com/users/praateekmahajan/received_events",
"repos_url": "https://api.github.com/users/praateekmahajan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/praateekmahajan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praateekmahajan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/praateekmahajan",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Already supported."
] | 2020-10-01T00:56:18
| 2022-02-15T10:46:50
| 2022-02-15T10:46:50
|
NONE
| null | null | null | null |
This is great work, so a huge shoutout to the contributors and Hugging Face.
The [/nlp/viewer](https://huggingface.co/nlp/viewer/) and the [/datasets](https://huggingface.co/datasets) page are both great. I was wondering if, in either or both places, we could have a filter that selects datasets suited to particular tasks (non-exhaustive list):
- Classification
- Multi label
- Multi class
- Q&A
- Summarization
- Translation
I believe this feature would have some value for folks trying to find datasets for a particular task and then testing their model's capabilities.
Thank you :)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/691/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/691/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 502 days, 9:50:32
|
https://api.github.com/repos/huggingface/datasets/issues/690
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/690/comments
|
https://api.github.com/repos/huggingface/datasets/issues/690/events
|
https://github.com/huggingface/datasets/issues/690
| 712,150,321
|
MDU6SXNzdWU3MTIxNTAzMjE=
| 690
|
XNLI dataset: NonMatchingChecksumError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4",
"events_url": "https://api.github.com/users/xiey1/events{/privacy}",
"followers_url": "https://api.github.com/users/xiey1/followers",
"following_url": "https://api.github.com/users/xiey1/following{/other_user}",
"gists_url": "https://api.github.com/users/xiey1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xiey1",
"id": 13307358,
"login": "xiey1",
"node_id": "MDQ6VXNlcjEzMzA3MzU4",
"organizations_url": "https://api.github.com/users/xiey1/orgs",
"received_events_url": "https://api.github.com/users/xiey1/received_events",
"repos_url": "https://api.github.com/users/xiey1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xiey1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiey1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xiey1",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.",
"Well actually it looks like the link isn't working anymore :(",
"The new link is https://cims.nyu.edu/~sbowman/xnli/XNLI-1.0.zip\r\nI'll update the dataset script",
"I'll do a release in the next few days to make the fix available for everyone.\r\nIn the meantime you can load `xnli` with\r\n```\r\nxnli = load_dataset('xnli', script_version=\"master\")\r\n```\r\nThis will use the latest version of the xnli script (available on master branch), instead of the old one.",
"That's awesome! Thanks a lot!"
] | 2020-09-30T17:50:03
| 2020-10-01T17:15:08
| 2020-10-01T14:01:14
|
NONE
| null | null | null | null |
Hi,
I tried to download the "xnli" dataset in Colab using
`xnli = load_dataset(path='xnli')`
but got a 'NonMatchingChecksumError':
```
NonMatchingChecksumError                  Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')

3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     37         if len(bad_urls) > 0:
     38             error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39             raise NonMatchingChecksumError(error_msg + str(bad_urls))
     40         logger.info("All the checksums matched successfully" + for_verification_name)
     41

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
The same code worked fine in Colab several days ago but has stopped working now. Thanks!
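For anyone hitting this before the next release, the workaround suggested in the maintainers' comments above is to load the dataset script from the master branch. A minimal sketch, assuming a `datasets` version that still accepts the `script_version` argument:

```python
from datasets import load_dataset

# Use the xnli script from the master branch, which points at the updated
# download URL (see the maintainers' comments above).
xnli = load_dataset("xnli", script_version="master")
```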
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/690/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/690/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 20:11:11
|
https://api.github.com/repos/huggingface/datasets/issues/687
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/687/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/687/comments
|
https://api.github.com/repos/huggingface/datasets/issues/687/events
|
https://github.com/huggingface/datasets/issues/687
| 711,664,810
|
MDU6SXNzdWU3MTE2NjQ4MTA=
| 687
|
`ArrowInvalid` occurs while running `Dataset.map()` function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5601012?v=4",
"events_url": "https://api.github.com/users/peinan/events{/privacy}",
"followers_url": "https://api.github.com/users/peinan/followers",
"following_url": "https://api.github.com/users/peinan/following{/other_user}",
"gists_url": "https://api.github.com/users/peinan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/peinan",
"id": 5601012,
"login": "peinan",
"node_id": "MDQ6VXNlcjU2MDEwMTI=",
"organizations_url": "https://api.github.com/users/peinan/orgs",
"received_events_url": "https://api.github.com/users/peinan/received_events",
"repos_url": "https://api.github.com/users/peinan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/peinan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peinan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/peinan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi !\r\n\r\nThis is because `encode` expects one single text as input (str), or one tokenized text (List[str]).\r\nI believe that you actually wanted to use `encode_batch` which expects a batch of texts.\r\nHowever this method is only available for our \"fast\" tokenizers (ex: BertTokenizerFast).\r\nBertJapanese is not one of them unfortunately and I don't think it will be added for now (see https://github.com/huggingface/transformers/pull/7141)...\r\ncc @thomwolf for confirmation.\r\n\r\nTherefore what I'd suggest for now is disable batching and process one text at a time using `encode`.\r\nNote that you can make it faster by using multiprocessing:\r\n\r\n```python\r\nnum_proc = None # Specify here the number of processes if you want to use multiprocessing. ex: num_proc = 4\r\nencoded = train_ds.map(\r\n lambda example: {'tokens': t.encode(example['title'], max_length=1000)}, num_proc=num_proc\r\n)\r\n```\r\n",
"Thank you very much for the kind and precise suggestion!\r\nI'm looking forward to seeing BertJapaneseTokenizer built into the \"fast\" tokenizers.\r\n\r\nI tried `map` with multiprocessing as follows, and it worked!\r\n\r\n```python\r\n# There was a Pickle problem if I use `lambda` for multiprocessing\r\ndef encode(examples):\r\n return {'tokens': t.encode(examples['title'], max_length=1000)}\r\n\r\nnum_proc = 8\r\nencoded = train_ds.map(encode, num_proc=num_proc)\r\n```\r\n\r\nThank you again!"
] | 2020-09-30T06:16:50
| 2020-09-30T09:53:03
| 2020-09-30T09:53:03
|
NONE
| null | null | null | null |
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=None)
# }, num_rows: 99999)
# suggested in #665
class PicklableTokenizer(BertJapaneseTokenizer):

    def __getstate__(self):
        state = dict(self.__dict__)
        state['do_lower_case'] = self.word_tokenizer.do_lower_case
        state['never_split'] = self.word_tokenizer.never_split
        del state['word_tokenizer']
        return state

    def __setstate__(self, state):
        do_lower_case = state.pop('do_lower_case')
        never_split = state.pop('never_split')
        self.__dict__ = state
        self.word_tokenizer = MecabTokenizer(
            do_lower_case=do_lower_case, never_split=never_split
        )

t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')
encoded = train_ds.map(
    lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000
)
```
Error Message:
```
99% 99/100 [00:22<00:00, 39.07ba/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<timed exec> in <module>
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1496 if update_data:
1497 batch = cast_to_python_objects(batch)
-> 1498 writer.write_batch(batch)
1499 if update_data:
1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)
272 typed_sequence_examples[col] = typed_sequence
--> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples)
274 self.write_table(pa_table)
275
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate()
/usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000
```
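For context on the traceback: `t.encode` tokenizes a single text, so in batched mode the one returned list has a length governed by `max_length=1000`, which only coincidentally matched the batch size until the final, shorter batch of 999 examples. A minimal sketch of a batched-compatible mapping under that assumption, reusing the tokenizer `t` and `train_ds` defined above (this per-example loop is my own workaround, not an official fix):

```python
def encode_batch(examples):
    # Return one token list per example so the new column length matches the batch size.
    return {"tokens": [t.encode(title, max_length=1000) for title in examples["title"]]}

encoded = train_ds.map(encode_batch, batched=True, batch_size=1000)
```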
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5601012?v=4",
"events_url": "https://api.github.com/users/peinan/events{/privacy}",
"followers_url": "https://api.github.com/users/peinan/followers",
"following_url": "https://api.github.com/users/peinan/following{/other_user}",
"gists_url": "https://api.github.com/users/peinan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/peinan",
"id": 5601012,
"login": "peinan",
"node_id": "MDQ6VXNlcjU2MDEwMTI=",
"organizations_url": "https://api.github.com/users/peinan/orgs",
"received_events_url": "https://api.github.com/users/peinan/received_events",
"repos_url": "https://api.github.com/users/peinan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/peinan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peinan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/peinan",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/687/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/687/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3:36:13
|
https://api.github.com/repos/huggingface/datasets/issues/686
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/686/comments
|
https://api.github.com/repos/huggingface/datasets/issues/686/events
|
https://github.com/huggingface/datasets/issues/686
| 711,385,739
|
MDU6SXNzdWU3MTEzODU3Mzk=
| 686
|
Dataset browser url is still https://huggingface.co/nlp/viewer/
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)",
"This was fixed but forgot to close the issue. cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!"
] | 2020-09-29T19:21:52
| 2021-01-08T18:29:26
| 2021-01-08T18:29:26
|
CONTRIBUTOR
| null | null | null | null |
Might be worth updating to https://huggingface.co/datasets/viewer/
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/686/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/686/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 100 days, 23:07:34
|
https://api.github.com/repos/huggingface/datasets/issues/678
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/678/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/678/comments
|
https://api.github.com/repos/huggingface/datasets/issues/678/events
|
https://github.com/huggingface/datasets/issues/678
| 710,060,497
|
MDU6SXNzdWU3MTAwNjA0OTc=
| 678
|
The download instructions for c4 datasets are not contained in the error message
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Good catch !\r\nIndeed the `@property` is missing.\r\n\r\nFeel free to open a PR :)",
"Also not that C4 is a dataset that needs an Apache Beam runtime to be generated.\r\nFor example Dataflow, Spark, Flink etc.\r\n\r\nUsually we generate the dataset on our side once and for all, but we haven't done it for C4 yet.\r\nMore info about beam datasets [here](https://huggingface.co/docs/datasets/beam_dataset.html)\r\n\r\nLet me know if you have any questions"
] | 2020-09-28T08:30:54
| 2020-09-28T10:26:09
| 2020-09-28T10:26:09
|
CONTRIBUTOR
| null | null | null | null |
The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>.
Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path/to/manual/data>')
```
Either `@property` could be added to `C4.manual_download_instructions` (making it a real property), or the `manual_download_instructions` method needs to be called, I think.
Let me know if you want a PR for this, but I'm not sure which possible fix is the correct one.
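A minimal sketch of the first option (the maintainers confirm above that the `@property` is indeed missing); the instruction text and base class here are placeholders, not the actual patch:

```python
import datasets


class C4(datasets.BeamBasedBuilder):
    @property
    def manual_download_instructions(self):
        # With @property, the error message interpolates the string itself
        # instead of printing "<bound method ...>".
        return (
            "Download the processed C4 files manually and pass their location "
            "with `data_dir=...` when calling load_dataset."
        )
```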
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/678/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/678/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1:55:15
|
https://api.github.com/repos/huggingface/datasets/issues/676
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/676/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/676/comments
|
https://api.github.com/repos/huggingface/datasets/issues/676/events
|
https://github.com/huggingface/datasets/issues/676
| 710,014,319
|
MDU6SXNzdWU3MTAwMTQzMTk=
| 676
|
train_test_split returns empty dataset item
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4",
"events_url": "https://api.github.com/users/mojave-pku/events{/privacy}",
"followers_url": "https://api.github.com/users/mojave-pku/followers",
"following_url": "https://api.github.com/users/mojave-pku/following{/other_user}",
"gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mojave-pku",
"id": 26648528,
"login": "mojave-pku",
"node_id": "MDQ6VXNlcjI2NjQ4NTI4",
"organizations_url": "https://api.github.com/users/mojave-pku/orgs",
"received_events_url": "https://api.github.com/users/mojave-pku/received_events",
"repos_url": "https://api.github.com/users/mojave-pku/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mojave-pku",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"The problem still exists after removing the cache files.",
"Can you reproduce this example in a Colab so we can investigate? (or give more information on your software/hardware config)",
"Thanks for reporting.\r\nI just found the issue, I'm creating a PR",
"We'll do a release pretty soon to include the fix :)\r\nIn the meantime you can install the lib from source if you want to "
] | 2020-09-28T07:19:33
| 2020-10-07T13:46:33
| 2020-10-07T13:38:06
|
NONE
| null | null | null | null |
I tried to split my dataset with `train_test_split`, but afterwards the items in the resulting `train` and `test` `Dataset`s are empty.
The code:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
print(yelp_data['test'])
print(yelp_data['test'][0])
```
The output:
```
{'stars': 2.0, 'text': 'xxxx'}
Loading cached split indices for dataset at /home/ssd4/huanglianzhe/test_yelp/cache-f9b22d8b9d5a7346.arrow and /home/ssd4/huanglianzhe/test_yelp/cache-4aa26fa4005059d1.arrow
DatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)})
Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)
{} # yelp_data['test'][0] is empty
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/676/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/676/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 9 days, 6:18:33
|
https://api.github.com/repos/huggingface/datasets/issues/675
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/675/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/675/comments
|
https://api.github.com/repos/huggingface/datasets/issues/675/events
|
https://github.com/huggingface/datasets/issues/675
| 709,818,725
|
MDU6SXNzdWU3MDk4MTg3MjU=
| 675
|
Add custom dataset to NLP?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4",
"events_url": "https://api.github.com/users/timpal0l/events{/privacy}",
"followers_url": "https://api.github.com/users/timpal0l/followers",
"following_url": "https://api.github.com/users/timpal0l/following{/other_user}",
"gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timpal0l",
"id": 6556710,
"login": "timpal0l",
"node_id": "MDQ6VXNlcjY1NTY3MTA=",
"organizations_url": "https://api.github.com/users/timpal0l/orgs",
"received_events_url": "https://api.github.com/users/timpal0l/received_events",
"repos_url": "https://api.github.com/users/timpal0l/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timpal0l",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Yes you can have a look here: https://huggingface.co/docs/datasets/loading_datasets.html#csv-files",
"No activity, closing"
] | 2020-09-27T21:22:50
| 2020-10-20T09:08:49
| 2020-10-20T09:08:49
|
CONTRIBUTOR
| null | null | null | null |
Is it possible to add a custom dataset such as a .csv to the NLP library?
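For reference, this is covered by the docs linked in the comments; a minimal sketch of loading a local CSV (file names are placeholders):

```python
from datasets import load_dataset

# Load a single local CSV file as a Dataset.
dataset = load_dataset("csv", data_files="my_data.csv")

# Or with explicit splits.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
```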
Thanks.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/675/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/675/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 22 days, 11:45:59
|
https://api.github.com/repos/huggingface/datasets/issues/674
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/674/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/674/comments
|
https://api.github.com/repos/huggingface/datasets/issues/674/events
|
https://github.com/huggingface/datasets/issues/674
| 709,661,006
|
MDU6SXNzdWU3MDk2NjEwMDY=
| 674
|
load_dataset() won't download in Windows
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34422661?v=4",
"events_url": "https://api.github.com/users/ThisDavehead/events{/privacy}",
"followers_url": "https://api.github.com/users/ThisDavehead/followers",
"following_url": "https://api.github.com/users/ThisDavehead/following{/other_user}",
"gists_url": "https://api.github.com/users/ThisDavehead/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ThisDavehead",
"id": 34422661,
"login": "ThisDavehead",
"node_id": "MDQ6VXNlcjM0NDIyNjYx",
"organizations_url": "https://api.github.com/users/ThisDavehead/orgs",
"received_events_url": "https://api.github.com/users/ThisDavehead/received_events",
"repos_url": "https://api.github.com/users/ThisDavehead/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ThisDavehead/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThisDavehead/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ThisDavehead",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.\r\n\r\nThis is the output:\r\n```\r\n>>> dataset = load_dataset('blended_skill_talk', split='train')\r\nUsing custom data configuration default <-- This step never ends\r\n```",
"This was fixed in #644 \r\nI'll do a new release soon :)\r\n\r\nIn the meantime you can run it by installing from source",
"Closing since version 1.1.0 got released with Windows support :) \r\nLet me know if it works for you now"
] | 2020-09-27T03:56:25
| 2020-10-05T08:28:18
| 2020-10-05T08:28:18
|
NONE
| null | null | null | null |
I don't know if this is just me or Windows in general. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDEs and the command line, but the behavior was the same. I've also tried it with all virus and malware protection turned off. I've made sure Python and all IDEs are exceptions to the firewall and all the requisite permissions are enabled.
Additionally, I checked whether other packages could download content such as an nltk corpus, and they could. I've also run the same script on Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder, it worked fine by reusing the already-downloaded data, but it's cumbersome to do that for every dataset I want to try in my Windows environment.
Could this be a bug, or is there something I'm doing wrong or not thinking of?
Thanks.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/674/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/674/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 8 days, 4:31:53
|
https://api.github.com/repos/huggingface/datasets/issues/673
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/673/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/673/comments
|
https://api.github.com/repos/huggingface/datasets/issues/673/events
|
https://github.com/huggingface/datasets/issues/673
| 709,603,989
|
MDU6SXNzdWU3MDk2MDM5ODk=
| 673
|
blog_authorship_corpus crashed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7553188?v=4",
"events_url": "https://api.github.com/users/Moshiii/events{/privacy}",
"followers_url": "https://api.github.com/users/Moshiii/followers",
"following_url": "https://api.github.com/users/Moshiii/following{/other_user}",
"gists_url": "https://api.github.com/users/Moshiii/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moshiii",
"id": 7553188,
"login": "Moshiii",
"node_id": "MDQ6VXNlcjc1NTMxODg=",
"organizations_url": "https://api.github.com/users/Moshiii/orgs",
"received_events_url": "https://api.github.com/users/Moshiii/received_events",
"repos_url": "https://api.github.com/users/Moshiii/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moshiii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moshiii/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moshiii",
"user_view_type": "public"
}
|
[
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting !\r\nWe'll free some memory"
] | 2020-09-26T20:15:28
| 2022-02-15T10:47:58
| 2022-02-15T10:47:58
|
NONE
| null | null | null | null |
This is just to report that when I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:

|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/673/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/673/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 506 days, 14:32:30
|
https://api.github.com/repos/huggingface/datasets/issues/672
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/672/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/672/comments
|
https://api.github.com/repos/huggingface/datasets/issues/672/events
|
https://github.com/huggingface/datasets/issues/672
| 709,575,527
|
MDU6SXNzdWU3MDk1NzU1Mjc=
| 672
|
Questions about XSUM
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/danyaljj",
"id": 2441454,
"login": "danyaljj",
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/danyaljj",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"We should try to regenerate the data using the official script.\r\nBut iirc that's what we used in the first place, so not sure why it didn't match in the first place.\r\n\r\nI'll let you know when the dataset is updated",
"Thanks, looking forward to hearing your update on this thread. \r\n\r\nThis is a blocking issue for us; would appreciate any progress on this front. We can also help with the fix, if you deem it appropriately. ",
"I just started the generation on my side, I'll let you know how it goes :) ",
"Hmm after a first run I'm still missing 136668/226711 urls.\r\nI'll relaunch it tomorrow to try to get the remaining ones.",
"Update: I'm missing 36/226711 urls but I haven't managed to download them yet",
"Thanks! That sounds like a reasonable number! ",
"So I managed to download them all but when parsing only 226,181/226,711 worked.\r\nNot sure if it's worth digging and debugging parsing at this point :/ ",
"Maybe @sshleifer can help, I think he's already played with xsum at one point",
"Thanks @lhoestq\r\nIt would be great to improve coverage, but IDs are the really crucial part for us. We'd really appreciate an update to the dataset with IDs either way!",
"I gave up at an even earlier point. The dataset I use has 204,017 train examples.",
"@lhoestq @sshleifer like @jbragg said earlier, the main issue for us is that the current XSUM dataset (in your package) does not have IDs suggested by the original dataset ([here is the file](https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json).) Would appreciate if you update the XSUM dataset to include the instance IDs. \r\n\r\nThe missing instances is also a problem, but likely not worth pursuing given its relatively small scale. ",
">So I managed to download them all but when parsing only 226,181/226,711 worked.\r\n\r\n@lhoestq any chance we could update the HF-hosted dataset with the IDs in your new version? Happy to help if there's something I can do.",
"Well I couldn't parse what I downloaded.\r\nUnfortunately I think I won't be able to take a look at it this week.\r\nI can try to send you what I got if you want to give it a shot @jbragg \r\nOtherwise feel free to re-run the xsum download script, maybe you'll be luckier than me",
"Resolved via #754"
] | 2020-09-26T17:16:24
| 2022-10-04T17:30:17
| 2022-10-04T17:30:17
|
CONTRIBUTOR
| null | null | null | null |
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions about it.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017)
>>> data['test']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333)
```
The first issue is that the instance counts don't match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for the test set; 204,017 vs 204,045 for the training set):
```
… training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set.
```
Any thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten)
Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances: https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json, but to be able to use them, the dataset sizes need to match.
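A quick sanity check against the official split file linked above (a sketch; the split names printed are whatever keys that JSON file uses):

```python
import json
import urllib.request

url = (
    "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/"
    "XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json"
)
# Download the official split definition and print the size of each split.
with urllib.request.urlopen(url) as response:
    splits = json.load(response)
print({split: len(ids) for split, ids in splits.items()})
```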
CC @jbragg
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/672/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/672/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 738 days, 0:13:53
|
https://api.github.com/repos/huggingface/datasets/issues/671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/671/events
|
https://github.com/huggingface/datasets/issues/671
| 709,093,151
|
MDU6SXNzdWU3MDkwOTMxNTE=
| 671
|
[BUG] No such file or directory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbragg",
"id": 2238344,
"login": "jbragg",
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"repos_url": "https://api.github.com/users/jbragg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbragg",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-09-25T16:38:54
| 2020-09-28T14:42:42
| 2020-09-28T14:42:42
|
CONTRIBUTOR
| null | null | null | null |
This happens when both:
1. The Hugging Face datasets cache dir does not exist
2. You try to load a local dataset script

`builder.py` throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist:
https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177
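A minimal sketch of the kind of fix I have in mind (the helper name is hypothetical, not the actual patch): create the cache directory before taking the lock.

```python
import os

from filelock import FileLock


def acquire_builder_lock(cache_dir: str, lock_name: str) -> FileLock:
    # FileLock fails with "No such file or directory" if the parent dir is missing,
    # so make sure the cache directory exists first.
    os.makedirs(cache_dir, exist_ok=True)
    return FileLock(os.path.join(cache_dir, lock_name))
```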
Tested on v1.0.2
@lhoestq
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/671/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/671/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 22:03:48
|
https://api.github.com/repos/huggingface/datasets/issues/669
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/669/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/669/comments
|
https://api.github.com/repos/huggingface/datasets/issues/669/events
|
https://github.com/huggingface/datasets/issues/669
| 708,857,595
|
MDU6SXNzdWU3MDg4NTc1OTU=
| 669
|
How to skip a example when running dataset.map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xixiaoyao",
"id": 24541791,
"login": "xixiaoyao",
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xixiaoyao",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @xixiaoyao,\r\nDepending on what you want to do you can:\r\n- use a first step of `filter` to filter out the invalid examples: https://huggingface.co/docs/datasets/processing.html#filtering-rows-select-and-filter\r\n- or directly detect the invalid examples inside the callable used with `map` and return them unchanged or even remove them at the same time if you are using `map` in batched mode. Here is an example where we use `map` in batched mode to add new rows on the fly but you can also use it to remove examples on the fly (that's what `filter` actually do under-the-hood): https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset",
"Closing this one.\r\nFeel free to re-open if you have other questions",
"Letting finders-of-this-thread know that the new link is: https://huggingface.co/docs/datasets/process#data-augmentation\r\n"
] | 2020-09-25T11:17:53
| 2022-06-17T21:45:03
| 2020-10-05T16:28:13
|
NONE
| null | null | null | null |
In my processing function, I process examples and detect some invalid ones, which I do not want to be added to the train dataset. However, I did not find a way to skip these recognized invalid examples when doing `dataset.map`.
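For reference, a sketch of the two approaches suggested in the comments above; the toy columns and the `is_valid` check are placeholders:

```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["ok", "", "fine"], "label": [0, 1, 0]})


def is_valid(text):
    # Hypothetical validity check; replace with the real detection logic.
    return len(text) > 0


# Option 1: drop invalid examples up front with filter.
clean = dataset.filter(lambda example: is_valid(example["text"]))


# Option 2: drop them inside a batched map, which may return fewer rows than it receives.
def drop_invalid(batch):
    keep = [i for i, text in enumerate(batch["text"]) if is_valid(text)]
    return {key: [values[i] for i in keep] for key, values in batch.items()}


clean = dataset.map(drop_invalid, batched=True, remove_columns=dataset.column_names)
```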
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/669/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/669/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10 days, 5:10:20
|
https://api.github.com/repos/huggingface/datasets/issues/668
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/668/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/668/comments
|
https://api.github.com/repos/huggingface/datasets/issues/668/events
|
https://github.com/huggingface/datasets/issues/668
| 708,310,956
|
MDU6SXNzdWU3MDgzMTA5NTY=
| 668
|
OverflowError when slicing with an array containing negative ids
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-09-24T16:27:14
| 2020-09-28T14:42:19
| 2020-09-28T14:42:19
|
MEMBER
| null | null | null | null |
```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[0])
# {'a': 0}
print(d[-1])
# {'a': 9}
print(d[[0, -1]])
# OverflowError
```
results in
```
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-5-863dc3555598> in <module>
----> 1 d[[0, -1]]
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
1070 format_columns=self._format_columns,
1071 output_all_columns=self._output_all_columns,
-> 1072 format_kwargs=self._format_kwargs,
1073 )
1074
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1025 indices = key
1026
-> 1027 indices_array = pa.array([int(i) for i in indices], type=pa.uint64())
1028
1029 # Check if we need to convert indices
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
OverflowError: can't convert negative value to unsigned int
```
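A possible workaround in the meantime (just a sketch, not the actual fix) is to normalize negative indices to their positive equivalents before slicing:
```python
from datasets import Dataset

d = Dataset.from_dict({"a": range(10)})
indices = [0, -1]
# convert negative indices to their positive equivalents before slicing
print(d[[i % len(d) for i in indices]])
# {'a': [0, 9]}
```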
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/668/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/668/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 22:15:05
|
https://api.github.com/repos/huggingface/datasets/issues/667
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/667/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/667/comments
|
https://api.github.com/repos/huggingface/datasets/issues/667/events
|
https://github.com/huggingface/datasets/issues/667
| 708,258,392
|
MDU6SXNzdWU3MDgyNTgzOTI=
| 667
|
Loss not decrease with Datasets and Transformers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23032865?v=4",
"events_url": "https://api.github.com/users/wangcongcong123/events{/privacy}",
"followers_url": "https://api.github.com/users/wangcongcong123/followers",
"following_url": "https://api.github.com/users/wangcongcong123/following{/other_user}",
"gists_url": "https://api.github.com/users/wangcongcong123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wangcongcong123",
"id": 23032865,
"login": "wangcongcong123",
"node_id": "MDQ6VXNlcjIzMDMyODY1",
"organizations_url": "https://api.github.com/users/wangcongcong123/orgs",
"received_events_url": "https://api.github.com/users/wangcongcong123/received_events",
"repos_url": "https://api.github.com/users/wangcongcong123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wangcongcong123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangcongcong123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wangcongcong123",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"And I tested it on T5ForConditionalGeneration, that works no problem.",
"Hi did you manage to fix your issue ?\r\n\r\nIf so feel free to share your fix and close this thread"
] | 2020-09-24T15:14:43
| 2021-01-01T20:01:25
| 2021-01-01T20:01:25
|
NONE
| null | null | null | null |
Hi,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering on the SQuAD dataset. In that colab, the loss decreases as expected. When I adapt it to SST2, the loss fails to decrease as it should. I attach the adapted script below and would appreciate anyone pointing out what I am missing.
```python
import torch
from datasets import load_dataset
from transformers import BertForSequenceClassification
from transformers import BertTokenizerFast
# Load our training dataset and tokenizer
dataset = load_dataset("glue", 'sst2')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
del dataset["test"] # let's remove it in this demo
# Tokenize our training dataset
def convert_to_features(example_batch):
encodings = tokenizer(example_batch["sentence"])
encodings.update({"labels": example_batch["label"]})
return encodings
encoded_dataset = dataset.map(convert_to_features, batched=True)
# Format our dataset to output torch.Tensor objects to train a PyTorch model
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'labels']
encoded_dataset.set_format(type='torch', columns=columns)
# Instantiate a PyTorch Dataloader around our dataset
# Let's do dynamic batching (pad on the fly with our own collate_fn)
def collate_fn(examples):
return tokenizer.pad(examples, return_tensors='pt')
dataloader = torch.utils.data.DataLoader(encoded_dataset['train'], collate_fn=collate_fn, batch_size=8)
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Let's load a pretrained Bert model and a simple optimizer
model = BertForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
```
In case needed.
- datasets == 1.0.2
- transformers == 3.2.0
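Two generic things that may be worth checking first (sketches building on the script above, not a confirmed cause for this report): shuffling the DataLoader, and tracking a smoothed loss, since the raw per-step loss at batch size 8 is very noisy.
```python
# shuffle the training data; a fixed example order can make the loss look stuck
dataloader = torch.utils.data.DataLoader(
    encoded_dataset['train'], collate_fn=collate_fn, batch_size=8, shuffle=True
)

running_loss = 0.0
for i, batch in enumerate(dataloader):
    batch.to(device)
    outputs = model(**batch)
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    model.zero_grad()
    # an exponential moving average is easier to read than the raw per-step loss
    running_loss = 0.98 * running_loss + 0.02 * loss.item()
    if i % 50 == 0:
        print(f'Step {i} - running loss: {running_loss:.3f}')
```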
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23032865?v=4",
"events_url": "https://api.github.com/users/wangcongcong123/events{/privacy}",
"followers_url": "https://api.github.com/users/wangcongcong123/followers",
"following_url": "https://api.github.com/users/wangcongcong123/following{/other_user}",
"gists_url": "https://api.github.com/users/wangcongcong123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wangcongcong123",
"id": 23032865,
"login": "wangcongcong123",
"node_id": "MDQ6VXNlcjIzMDMyODY1",
"organizations_url": "https://api.github.com/users/wangcongcong123/orgs",
"received_events_url": "https://api.github.com/users/wangcongcong123/received_events",
"repos_url": "https://api.github.com/users/wangcongcong123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wangcongcong123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangcongcong123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wangcongcong123",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/667/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/667/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 99 days, 4:46:42
|
https://api.github.com/repos/huggingface/datasets/issues/666
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/666/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/666/comments
|
https://api.github.com/repos/huggingface/datasets/issues/666/events
|
https://github.com/huggingface/datasets/issues/666
| 707,608,578
|
MDU6SXNzdWU3MDc2MDg1Nzg=
| 666
|
Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31090427?v=4",
"events_url": "https://api.github.com/users/wahab4114/events{/privacy}",
"followers_url": "https://api.github.com/users/wahab4114/followers",
"following_url": "https://api.github.com/users/wahab4114/following{/other_user}",
"gists_url": "https://api.github.com/users/wahab4114/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wahab4114",
"id": 31090427,
"login": "wahab4114",
"node_id": "MDQ6VXNlcjMxMDkwNDI3",
"organizations_url": "https://api.github.com/users/wahab4114/orgs",
"received_events_url": "https://api.github.com/users/wahab4114/received_events",
"repos_url": "https://api.github.com/users/wahab4114/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wahab4114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahab4114/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wahab4114",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"No they are other similar copies but they are not provided by the official Bert models authors."
] | 2020-09-23T19:02:25
| 2020-10-27T15:19:25
| 2020-10-27T15:19:25
|
NONE
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/666/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/666/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 33 days, 20:17:00
|
|
https://api.github.com/repos/huggingface/datasets/issues/665
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/665/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/665/comments
|
https://api.github.com/repos/huggingface/datasets/issues/665/events
|
https://github.com/huggingface/datasets/issues/665
| 707,037,738
|
MDU6SXNzdWU3MDcwMzc3Mzg=
| 665
|
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xixiaoyao",
"id": 24541791,
"login": "xixiaoyao",
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xixiaoyao",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi !\r\nIt works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.\r\n\r\nWhich version of transformers/datasets are you using ?",
"transformers and datasets are both the latest",
"Then I guess you need to give us more informations on your setup (OS, python, GPU, etc) or a Google Colab reproducing the error for us to be able to debug this error.",
"And your version of `dill` if possible :)",
"I have the same issue with `transformers/BertJapaneseTokenizer`.\r\n\r\n\r\n\r\n```python\r\n# train_ds = Dataset(features: {\r\n# 'title': Value(dtype='string', id=None), \r\n# 'score': Value(dtype='float64', id=None)\r\n# }, num_rows: 99999)\r\n\r\nt = BertJapaneseTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')\r\nencoded = train_ds.map(lambda examples: {'tokens': t.encode(examples['title'])}, batched=True)\r\n```\r\n\r\n<details><summary>Error Message</summary>\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-35-2b7d66b291c1> in <module>\r\n 2 \r\n 3 encoded = train_ds.map(lambda examples:\r\n----> 4 {'tokens': t.encode(examples['title'])}, batched=True)\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1242 fn_kwargs=fn_kwargs,\r\n 1243 new_fingerprint=new_fingerprint,\r\n-> 1244 update_data=update_data,\r\n 1245 )\r\n 1246 else:\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 151 \"output_all_columns\": self._output_all_columns,\r\n 152 }\r\n--> 153 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 154 if new_format[\"columns\"] is not None:\r\n 155 new_format[\"columns\"] = list(set(new_format[\"columns\"]) & set(out.column_names))\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)\r\n 156 kwargs_for_fingerprint[\"fingerprint_name\"] = fingerprint_name\r\n 157 kwargs[fingerprint_name] = update_fingerprint(\r\n--> 158 self._fingerprint, transform, kwargs_for_fingerprint\r\n 159 )\r\n 160 \r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)\r\n 103 for key in sorted(transform_args):\r\n 104 hasher.update(key)\r\n--> 105 hasher.update(transform_args[key])\r\n 106 return hasher.hexdigest()\r\n 107 \r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in update(self, value)\r\n 55 def update(self, value):\r\n 56 self.m.update(f\"=={type(value)}==\".encode(\"utf8\"))\r\n---> 57 self.m.update(self.hash(value).encode(\"utf-8\"))\r\n 58 \r\n 59 def hexdigest(self):\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in hash(cls, value)\r\n 51 return cls.dispatch[type(value)](cls, value)\r\n 52 else:\r\n---> 53 return cls.hash_default(value)\r\n 54 \r\n 55 def update(self, value):\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in hash_default(cls, value)\r\n 44 @classmethod\r\n 45 def hash_default(cls, value):\r\n---> 46 return cls.hash_bytes(dumps(value))\r\n 47 \r\n 48 @classmethod\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/utils/py_utils.py in dumps(obj)\r\n 365 file = StringIO()\r\n 366 with _no_cache_fields(obj):\r\n--> 367 dump(obj, file)\r\n 368 return file.getvalue()\r\n 369 \r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/utils/py_utils.py in dump(obj, file)\r\n 337 def dump(obj, file):\r\n 338 \"\"\"pickle an object to a file\"\"\"\r\n--> 339 Pickler(file, recurse=True).dump(obj)\r\n 340 return\r\n 341 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj)\r\n 444 raise 
PicklingError(msg)\r\n 445 else:\r\n--> 446 StockPickler.dump(self, obj)\r\n 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects\r\n 448 return\r\n\r\n/usr/local/lib/python3.6/pickle.py in dump(self, obj)\r\n 407 if self.proto >= 4:\r\n 408 self.framer.start_framing()\r\n--> 409 self.save(obj)\r\n 410 self.write(STOP)\r\n 411 self.framer.end_framing()\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in save_function(pickler, obj)\r\n 1436 globs, obj.__name__,\r\n 1437 obj.__defaults__, obj.__closure__,\r\n-> 1438 obj.__dict__, fkwdefaults), obj=obj)\r\n 1439 else:\r\n 1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 608 else:\r\n 609 save(func)\r\n--> 610 save(args)\r\n 611 write(REDUCE)\r\n 612 \r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/pickle.py in save_tuple(self, obj)\r\n 749 write(MARK)\r\n 750 for element in obj:\r\n--> 751 save(element)\r\n 752 \r\n 753 if id(obj) in memo:\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n/usr/local/lib/python3.6/pickle.py in _batch_setitems(self, items)\r\n 850 k, v = tmp[0]\r\n 851 save(k)\r\n--> 852 save(v)\r\n 853 write(SETITEM)\r\n 854 # else tmp is empty, and we're done\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 519 \r\n 520 # Save the reduce() output and finally memoize the object\r\n--> 521 self.save_reduce(obj=obj, *rv)\r\n 522 \r\n 523 def persistent_id(self, obj):\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 632 \r\n 633 if state is not None:\r\n--> 634 save(state)\r\n 635 write(BUILD)\r\n 636 \r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 
self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n/usr/local/lib/python3.6/pickle.py in _batch_setitems(self, items)\r\n 845 for k, v in tmp:\r\n 846 save(k)\r\n--> 847 save(v)\r\n 848 write(SETITEMS)\r\n 849 elif n:\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 519 \r\n 520 # Save the reduce() output and finally memoize the object\r\n--> 521 self.save_reduce(obj=obj, *rv)\r\n 522 \r\n 523 def persistent_id(self, obj):\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 632 \r\n 633 if state is not None:\r\n--> 634 save(state)\r\n 635 write(BUILD)\r\n 636 \r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n/usr/local/lib/python3.6/pickle.py in _batch_setitems(self, items)\r\n 845 for k, v in tmp:\r\n 846 save(k)\r\n--> 847 save(v)\r\n 848 write(SETITEMS)\r\n 849 elif n:\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 494 reduce = getattr(obj, \"__reduce_ex__\", None)\r\n 495 if reduce is not None:\r\n--> 496 rv = reduce(self.proto)\r\n 497 else:\r\n 498 reduce = getattr(obj, \"__reduce__\", None)\r\n\r\nTypeError: can't pickle Tagger objects\r\n```\r\n\r\n</details>\r\n\r\ntrainsformers: 2.10.0\r\ndatasets: 1.0.2\r\ndill: 0.3.2\r\npython: 3.6.8\r\n\r\nOS: ubuntu 16.04 (Docker Image) on [Deep Learning VM](https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning) (GCP)\r\nGPU: Tesla P100 (CUDA 10)\r\n",
"> I have the same issue with `transformers/BertJapaneseTokenizer`.\r\n\r\nIt looks like it this tokenizer is not supported unfortunately.\r\nThis is because `t.word_tokenizer.mecab` is a `fugashi.fugashi.GenericTagger` which is not compatible with pickle nor dill.\r\n\r\nWe need objects passes to `map` to be picklable for our caching system to work properly.\r\nHere it crashes because the caching system is not able to pickle the GenericTagger.\r\n\r\n\\> Maybe you can create an issue on [fugashi](https://github.com/polm/fugashi/issues) 's repo and ask to make `fugashi.fugashi.GenericTagger` compatible with pickle ?\r\n\r\nWhat you can do in the meantime is use a picklable wrapper of the tokenizer:\r\n\r\n\r\n```python\r\nfrom transformers import BertJapaneseTokenizer, MecabTokenizer\r\n\r\nclass PicklableTokenizer(BertJapaneseTokenizer):\r\n\r\n def __getstate__(self):\r\n state = dict(self.__dict__)\r\n state[\"do_lower_case\"] = self.word_tokenizer.do_lower_case\r\n state[\"never_split\"] = self.word_tokenizer.never_split \r\n del state[\"word_tokenizer\"]\r\n return state\r\n\r\n def __setstate__(self, state):\r\n do_lower_case = state.pop(\"do_lower_case\")\r\n never_split = state.pop(\"never_split\")\r\n self.__dict__ = state\r\n self.word_tokenizer = MecabTokenizer(\r\n do_lower_case=do_lower_case, never_split=never_split)\r\n )\r\n\r\nt = PicklableTokenizer.from_pretrained(\"cl-tohoku/bert-base-japanese-whole-word-masking\")\r\nencoded = train_ds.map(lambda examples: {'tokens': t.encode(examples['title'])}, batched=True) # it works\r\n```",
"We can also update the `BertJapaneseTokenizer` in `transformers` as you just shown @lhoestq to make it compatible with pickle. It will be faster than asking on fugashi 's repo and good for the other users of `transformers` as well.\r\n\r\nI'm currently working on `transformers` I'll include it in the https://github.com/huggingface/transformers/pull/7141 PR and the next release of `transformers`.",
"Thank you for the rapid and polite response!\r\n\r\n@lhoestq Thanks for the suggestion! I've passed the pickle phase, but another `ArrowInvalid` problem occored. I created another issue #687 .\r\n\r\n@thomwolf Wow, really fast work. I'm looking forward to the next release 🤗"
] | 2020-09-23T04:28:14
| 2020-10-08T09:32:16
| 2020-10-08T09:32:16
|
NONE
| null | null | null | null |
I load the SQuAD dataset, then want to process the data using the following function with the Hugging Face Transformers `LongformerTokenizer`.
```
def convert_to_features(example):
    # Tokenize contexts and questions (as pairs of inputs)
    input_pairs = [example['question'], example['context']]
    encodings = tokenizer.encode_plus(input_pairs, pad_to_max_length=True, max_length=512)
    context_encodings = tokenizer.encode_plus(example['context'])

    # Compute start and end tokens for labels using Transformers' fast tokenizers alignment methods.
    # This will give us the position of the answer span in the context text
    start_idx, end_idx = get_correct_alignement(example['context'], example['answers'])
    start_positions_context = context_encodings.char_to_token(start_idx)
    end_positions_context = context_encodings.char_to_token(end_idx - 1)

    # Here we compute the start and end position of the answer in the whole example.
    # As the example is encoded like this <s> question</s></s> context</s>
    # and we know the position of the answer in the context,
    # we can just find the index of the sep token and add it to the position + 1 (+1 because there are two sep tokens).
    # This gives us the position of the answer span in the whole example.
    sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id)
    start_positions = start_positions_context + sep_idx + 1
    end_positions = end_positions_context + sep_idx + 1
    if end_positions > 512:
        start_positions, end_positions = 0, 0

    encodings.update({'start_positions': start_positions,
                      'end_positions': end_positions,
                      'attention_mask': encodings['attention_mask']})
    return encodings
```
Then I run `dataset.map(convert_to_features)`, and it raises
```
In [59]: a.map(convert_to_features)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-c453b508761d> in <module>
----> 1 a.map(convert_to_features)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
157 kwargs[fingerprint_name] = update_fingerprint(
--> 158 self._fingerprint, transform, kwargs_for_fingerprint
159 )
160
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
103 for key in sorted(transform_args):
104 hasher.update(key)
--> 105 hasher.update(transform_args[key])
106 return hasher.hexdigest()
107
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value)
55 def update(self, value):
56 self.m.update(f"=={type(value)}==".encode("utf8"))
---> 57 self.m.update(self.hash(value).encode("utf-8"))
58
59 def hexdigest(self):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/opt/conda/lib/python3.7/pickle.py in dump(self, obj)
435 if self.proto >= 4:
436 self.framer.start_framing()
--> 437 self.save(obj)
438 self.write(STOP)
439 self.framer.end_framing()
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_function(pickler, obj)
1436 globs, obj.__name__,
1437 obj.__defaults__, obj.__closure__,
-> 1438 obj.__dict__, fkwdefaults), obj=obj)
1439 else:
1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
636 else:
637 save(func)
--> 638 save(args)
639 write(REDUCE)
640
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/pickle.py in save_tuple(self, obj)
787 write(MARK)
788 for element in obj:
--> 789 save(element)
790
791 if id(obj) in memo:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
522 reduce = getattr(obj, "__reduce_ex__", None)
523 if reduce is not None:
--> 524 rv = reduce(self.proto)
525 else:
526 reduce = getattr(obj, "__reduce__", None)
TypeError: can't pickle Tokenizer objects
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/665/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/665/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 15 days, 5:04:02
|
https://api.github.com/repos/huggingface/datasets/issues/664
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/664/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/664/comments
|
https://api.github.com/repos/huggingface/datasets/issues/664/events
|
https://github.com/huggingface/datasets/issues/664
| 707,017,791
|
MDU6SXNzdWU3MDcwMTc3OTE=
| 664
|
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xixiaoyao",
"id": 24541791,
"login": "xixiaoyao",
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xixiaoyao",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?",
"Hi @xixiaoyao did you manage to fix your issue ?",
"No activity, closing",
"It happened when try to change the old project which use 'nlp' to new project which use 'datasets'. You should check you old 'my_squad.py' file, change the inherit class from `nlp.xxx` to `datasets.xxx`. Otherwise datasets - load.py - import_main_class() `if inspect.isclass(obj) and issubclass(obj, main_cls_type):` can not find the main_cls."
] | 2020-09-23T03:53:36
| 2023-04-17T09:31:20
| 2020-10-20T09:06:13
|
NONE
| null | null | null | null |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises an error.
```
train_dataset = datasets.load_dataset('./my_squad.py')
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-28-25a84b4d1581> in <module>
----> 1 train_dataset = nlp.load_dataset('./my_squad.py')

/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
    602         hash=hash,
    603         features=features,
--> 604         **config_kwargs,
    605     )
    606

TypeError: 'NoneType' object is not callable
```
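For reference, a minimal skeleton of what the local script needs in order to be found by `load_dataset` (an illustrative sketch, not the full squad script): there must be a class that inherits from `datasets.GeneratorBasedBuilder` rather than `nlp.GeneratorBasedBuilder`, otherwise `import_main_class` finds no builder class and you get exactly this `'NoneType' object is not callable` error.
```python
# my_squad.py -- minimal skeleton (illustrative only)
import datasets


class MySquad(datasets.GeneratorBasedBuilder):  # must subclass datasets.*, not nlp.*
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "context": datasets.Value("string"),
                }
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": "train-v1.1.json"}
            )
        ]

    def _generate_examples(self, filepath):
        # yield (key, example) pairs from the file here
        yield 0, {"id": "0", "question": "...", "context": "..."}
```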
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/664/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/664/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 27 days, 5:12:37
|
https://api.github.com/repos/huggingface/datasets/issues/657
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/657/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/657/comments
|
https://api.github.com/repos/huggingface/datasets/issues/657/events
|
https://github.com/huggingface/datasets/issues/657
| 706,204,383
|
MDU6SXNzdWU3MDYyMDQzODM=
| 657
|
Squad Metric Description & Feature Mismatch
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tshrjn",
"id": 8372098,
"login": "tshrjn",
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tshrjn",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting !\r\nThere indeed a mismatch between the features and the kwargs description\r\n\r\nI believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `references=squad[\"answers\"]` to `.compute()`.\r\nMaybe we can just fix the description then.",
"But then providing the `answer_start` becomes mandatory since the format of the features is checked against the one provided in the squad [file](https://github.com/huggingface/datasets/pull/658/files)."
] | 2020-09-22T09:07:00
| 2020-10-13T02:16:56
| 2020-09-29T15:57:38
|
NONE
| null | null | null | null |
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation.
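For context, a sketch of how the metric ends up being called in practice (reference format as I understand it from the declared features; the `answer_start` values are required by that schema even though the computation ignores them):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "qid-0", "prediction_text": "Denver Broncos"}]
references = [
    {
        "id": "qid-0",
        # answer_start must be provided to satisfy the declared features, but it is unused
        "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
    }
]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```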
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/657/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/657/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7 days, 6:50:38
|
https://api.github.com/repos/huggingface/datasets/issues/651
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/651/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/651/comments
|
https://api.github.com/repos/huggingface/datasets/issues/651/events
|
https://github.com/huggingface/datasets/issues/651
| 705,212,034
|
MDU6SXNzdWU3MDUyMTIwMzQ=
| 651
|
Problem with JSON dataset format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4",
"events_url": "https://api.github.com/users/vikigenius/events{/privacy}",
"followers_url": "https://api.github.com/users/vikigenius/followers",
"following_url": "https://api.github.com/users/vikigenius/following{/other_user}",
"gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vikigenius",
"id": 12724810,
"login": "vikigenius",
"node_id": "MDQ6VXNlcjEyNzI0ODEw",
"organizations_url": "https://api.github.com/users/vikigenius/orgs",
"received_events_url": "https://api.github.com/users/vikigenius/received_events",
"repos_url": "https://api.github.com/users/vikigenius/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vikigenius",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Currently the `json` dataset doesn't support this format unfortunately.\r\nHowever you could load it with\r\n```python\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\n\r\ndf = pd.read_json(\"path_to_local.json\", orient=\"index\")\r\ndataset = Dataset.from_pandas(df)\r\n```",
"or you can make a custom dataset script as explained in doc here: https://huggingface.co/docs/datasets/add_dataset.html"
] | 2020-09-20T23:57:14
| 2020-09-21T12:14:24
| null |
NONE
| null | null | null | null |
I have a local json dataset with the following form.
```
{
  'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
  'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
  ...
  'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
```
Note that instead of a list of records it's basically a dictionary of key value pairs with the keys being the record_ids and the values being the corresponding record.
Reading this with json:
```
data = datasets.load_dataset('json', data_files='path_to_local.json')
```
Throws an error and asks me to choose a field. What's the right way to handle this?
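A sketch of the workaround suggested in the comments above: load the file through pandas with `orient="index"` and convert it to a `Dataset`.
```python
import pandas as pd
from datasets import Dataset

# each top-level key ('id01234', ...) becomes one row; the key goes into the index
df = pd.read_json("path_to_local.json", orient="index")
df = df.reset_index().rename(columns={"index": "record_id"})  # keep the ids as a column
dataset = Dataset.from_pandas(df)
```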
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/651/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/651/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/650
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/650/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/650/comments
|
https://api.github.com/repos/huggingface/datasets/issues/650/events
|
https://github.com/huggingface/datasets/issues/650
| 704,861,844
|
MDU6SXNzdWU3MDQ4NjE4NDQ=
| 650
|
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi :) \r\nIn your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files.\r\nLet me know if it helps",
"Thanks for your comment @lhoestq ,\r\nJust for confirmation, changing dummy data like this won't make dummy test test the functionality to extract `subsetxxx.xz` but actually kind of circumvent it. But since we will test the real data so it is ok ?",
"Yes it's fine for now. We plan to add a job for slow tests.\r\nAnd at one point we'll also do another pass on the dummy data handling and consider extracting files.",
"Thanks for the confirmation.\r\nAlso the suggestion works. Thank you."
] | 2020-09-19T11:07:03
| 2020-09-22T11:54:10
| 2020-09-22T11:54:09
|
CONTRIBUTOR
| null | null | null | null |
Hi, I recently wanted to add a dataset whose source data looks like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
def _split_generators(self, dl_manager):
    dl_dir = dl_manager.download_and_extract(_URL)
    owt_dir = os.path.join(dl_dir, 'openwebtext')
    subset_xzs = [
        os.path.join(owt_dir, file_name)
        for file_name in os.listdir(owt_dir)
        if file_name.endswith('xz')  # filter out ...xz.lock
    ]
    ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count() * 0.75))
    nested_txt_files = [
        [
            os.path.join(ex_dir, txt_file_name)
            for txt_file_name in os.listdir(ex_dir)
            if txt_file_name.endswith('txt')
        ]
        for ex_dir in ex_dirs
    ]
    txt_files = chain(*nested_txt_files)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN, gen_kwargs={"txt_files": txt_files}
        ),
    ]
```
Everything went well: I can load and use the real openwebtext, except when I try to test with dummy data. The problem is that `MockDownloadManager.extract` does nothing, so `ex_dirs = dl_manager.extract(subset_xzs)` won't decompress the `subset_xxx.xz` files for me.
What should I do? Or could you modify `MockDownloadManager` to behave like a real `DownloadManager`?
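For reference, a sketch of the dummy-data layout suggested in the comments (file names are placeholders): the `subsetXXX.xz` entries are plain directories that already contain txt files, so nothing needs to be extracted during the dummy-data test.
```
dummy_data.zip
|__ openwebtext
    |__ subset000.xz          <- a directory, not a compressed file
    |   |__ dummy0.txt
    |__ subset001.xz
        |__ dummy1.txt
```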
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/650/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/650/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 0:47:06
|
https://api.github.com/repos/huggingface/datasets/issues/649
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/649/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/649/comments
|
https://api.github.com/repos/huggingface/datasets/issues/649/events
|
https://github.com/huggingface/datasets/issues/649
| 704,838,415
|
MDU6SXNzdWU3MDQ4Mzg0MTU=
| 649
|
Inconsistent behavior in map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10166085?v=4",
"events_url": "https://api.github.com/users/krandiash/events{/privacy}",
"followers_url": "https://api.github.com/users/krandiash/followers",
"following_url": "https://api.github.com/users/krandiash/following{/other_user}",
"gists_url": "https://api.github.com/users/krandiash/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/krandiash",
"id": 10166085,
"login": "krandiash",
"node_id": "MDQ6VXNlcjEwMTY2MDg1",
"organizations_url": "https://api.github.com/users/krandiash/orgs",
"received_events_url": "https://api.github.com/users/krandiash/received_events",
"repos_url": "https://api.github.com/users/krandiash/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/krandiash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krandiash/subscriptions",
"type": "User",
"url": "https://api.github.com/users/krandiash",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Thanks for reporting !\r\n\r\nThis issue must have appeared when we refactored type inference in `nlp`\r\nBy default the library tries to keep the same feature types when applying `map` but apparently it has troubles with nested structures. I'll try to fix that next week"
] | 2020-09-19T08:41:12
| 2020-09-21T16:13:05
| 2020-09-21T16:13:05
|
NONE
| null | null | null | null |
I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem.
```python
import datasets
# Dataset with a single feature called 'field' consisting of two examples
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
print(dataset[0])
# outputs
{'field': 'a'}
# Map this dataset to create another feature called 'otherfield', which is a dictionary containing a key called 'capital'
dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}})
print(dataset[0])
# output is okay
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Now I want to map again to modify 'otherfield', by adding another key called 'append_x' to the dictionary under 'otherfield'
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0])
# printing out the first example after applying the map shows that the new key 'append_x' doesn't get added
# it also messes up the value stored at 'capital'
{'field': 'a', 'otherfield': {'capital': None}}
# Instead, I try to do the same thing by using a different mapped fn
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0])
# this preserves the value under capital, but still no 'append_x'
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Instead, I try to pass 'otherfield' to remove_columns
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0])
# this still doesn't fix the problem
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Alternately, here's what happens if I just directly map both 'capital' and 'append_x' on a fresh dataset.
# Recreate the dataset
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
# Now map the entire 'otherfield' dict directly, instead of incrementally as before
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0])
# This looks good!
{'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}}
```
This might be a new issue, because I didn't see this behavior in the `nlp` library.
Any help is appreciated!
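For reference, one possible workaround (just a sketch, not a confirmed fix) is to pass an explicit `features` schema to `map` so the library does not try to reuse the old nested feature type when a new key is added under `otherfield`:
```python
from datasets import Dataset, Features, Value

dataset = Dataset.from_dict({'field': ['a', 'b']})
dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}})

# Explicitly describe the new nested schema, including the added 'append_x' key
new_features = Features({
    'field': Value('string'),
    'otherfield': {'capital': Value('string'), 'append_x': Value('string')},
})
dataset = dataset.map(
    lambda example: {'otherfield': {'capital': example['otherfield']['capital'],
                                    'append_x': example['field'] + 'x'}},
    features=new_features,
)
print(dataset[0])
# expected: {'field': 'a', 'otherfield': {'capital': 'A', 'append_x': 'ax'}}
```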
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/649/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/649/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 7:31:53
|
https://api.github.com/repos/huggingface/datasets/issues/648
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/648/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/648/comments
|
https://api.github.com/repos/huggingface/datasets/issues/648/events
|
https://github.com/huggingface/datasets/issues/648
| 704,753,123
|
MDU6SXNzdWU3MDQ3NTMxMjM=
| 648
|
offset overflow when multiprocessing batched map on large datasets.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"This should be fixed with #645 ",
"Feel free to re-open if it still occurs",
"This has just happened to me while working with a large (65GB) Parquet dataset.\n```\n[rank0]: Traceback (most recent call last):\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/launcher.py\", line 23, in <module>\n[rank0]: launch()\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/launcher.py\", line 19, in launch\n[rank0]: run_exp()\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/train/tuner.py\", line 110, in run_exp\n[rank0]: _training_function(config={\"args\": args, \"callbacks\": callbacks})\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/train/tuner.py\", line 72, in _training_function\n[rank0]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/train/sft/workflow.py\", line 51, in run_sft\n[rank0]: dataset_module = get_dataset(template, model_args, data_args, training_args, stage=\"sft\", **tokenizer_module)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/data/loader.py\", line 306, in get_dataset\n[rank0]: dataset = _get_merged_dataset(data_args.dataset, model_args, data_args, training_args, stage)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/data/loader.py\", line 184, in _get_merged_dataset\n[rank0]: datasets[dataset_name] = _load_single_dataset(dataset_attr, model_args, data_args, training_args)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/data/loader.py\", line 164, in _load_single_dataset\n[rank0]: return align_dataset(dataset, dataset_attr, data_args, training_args)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/app/LLaMA-Factory/src/llamafactory/data/converter.py\", line 279, in align_dataset\n[rank0]: return dataset.map(\n[rank0]: ^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py\", line 557, in wrapper\n[rank0]: out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py\", line 3171, in map\n[rank0]: for rank, done, content in iflatmap_unordered(\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/datasets/utils/py_utils.py\", line 728, in iflatmap_unordered\n[rank0]: [async_result.get(timeout=0.05) for async_result in async_results]\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/multiprocess/pool.py\", line 774, in get\n[rank0]: raise self._value\n[rank0]: pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays\n```",
"Probably also worth mentioning that my dataset is multimodal; there is one text column and another with PIL image lists (singletons.)",
"Hi ! Arrow has a limitation in the max size of binary data (e.g. images) per batch, you can try to lower the Arrow writer batch size e.g. `datasets.config.DEFAULT_MAX_BATCH_SIZE = 100` (default is 1000)",
"I've already tried to set that via the keyword argument `writer_batch_size=4` and the same issue appears; I'm currently testing preprocessing with no multiprocessing."
] | 2020-09-19T02:15:11
| 2025-06-17T12:56:07
| 2020-09-19T16:46:31
|
CONTRIBUTOR
| null | null | null | null |
It only happens when "multiprocessing", "batched", and a "large dataset" are used at the same time.
```
def bprocess(examples):
    examples['len'] = []
    for text in examples['text']:
        examples['len'].append(len(text))
    return examples
wiki.map(bprocess, batched=True, num_proc=8)
```
```
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 153, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1486, in _map_single
batch = self[i : i + batch_size]
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1071, in __getitem__
format_kwargs=self._format_kwargs,
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 972, in _getitem
data_subset = self._data.take(indices_array)
File "pyarrow/table.pxi", line 1145, in pyarrow.lib.Table.take
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/pyarrow/compute.py", line 268, in take
return call_function('take', [data, indices], options)
File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
"""
The above exception was the direct cause of the following exception:
ArrowInvalid Traceback (most recent call last)
in
30 owt = datasets.load_dataset('/home/yisiang/datasets/datasets/openwebtext/openwebtext.py', cache_dir='./datasets')['train']
31 print('load/create data from OpenWebText Corpus for ELECTRA')
---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f"electra_owt_{c.max_length}.arrow")
33 dsets.append(e_owt)
34
~/Reexamine_Attention/electra_pytorch/_utils/utils.py in map(self, **kwargs)
126 writer_batch_size=10**4,
127 num_proc=num_proc,
--> 128 **kwargs
129 )
130
~/hugdatafast/hugdatafast/transform.py in my_map(self, *args, **kwargs)
21 if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow'
22 if '/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name)
---> 23 return self.map(*args, cache_file_name=cache_file_name, **kwargs)
24
25 @patch
~/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/datasets/src/datasets/arrow_dataset.py in (.0)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
ArrowInvalid: offset overflow while concatenating arrays
```
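A possible mitigation while waiting for the fix in #645 (only a sketch, assuming the overflow comes from slicing very large shards) is to use smaller read/write batches when mapping:
```python
# Smaller batches keep each Arrow take() / write below the 32-bit offset limit
wiki = wiki.map(
    bprocess,
    batched=True,
    batch_size=1000,         # rows passed to the function per call
    writer_batch_size=1000,  # rows written to the cache file per chunk
    num_proc=8,
)
```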
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/648/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/648/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 14:31:20
|
https://api.github.com/repos/huggingface/datasets/issues/647
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/647/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/647/comments
|
https://api.github.com/repos/huggingface/datasets/issues/647/events
|
https://github.com/huggingface/datasets/issues/647
| 704,734,764
|
MDU6SXNzdWU3MDQ3MzQ3NjQ=
| 647
|
Cannot download dataset_info.json
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4",
"events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}",
"followers_url": "https://api.github.com/users/chiyuzhang94/followers",
"following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}",
"gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chiyuzhang94",
"id": 33407613,
"login": "chiyuzhang94",
"node_id": "MDQ6VXNlcjMzNDA3NjEz",
"organizations_url": "https://api.github.com/users/chiyuzhang94/orgs",
"received_events_url": "https://api.github.com/users/chiyuzhang94/received_events",
"repos_url": "https://api.github.com/users/chiyuzhang94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chiyuzhang94",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting !\r\nWe should add support for servers without internet connection indeed\r\nI'll do that early next week",
"Thanks, @lhoestq !\r\nPlease let me know when it is available. ",
"Right now the recommended way is to create the dataset on a server with internet connection and then to save it and copy the serialized dataset to the server without internet connection.",
"#652 should allow you to load text/json/csv/pandas datasets without an internet connection **IF** you've the dataset script locally.\r\n\r\nExample: \r\nIf you have `datasets/text/text.py` locally, then you can do `load_dataset(\"./datasets/text\", data_files=...)`"
] | 2020-09-19T01:35:15
| 2020-09-21T08:28:42
| 2020-09-21T08:28:42
|
NONE
| null | null | null | null |
I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `datasets.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text/default-53ee3045f07ba8ca/0.0.0/dataset_info.json
```
I tried to open this link manually, but I cannot access this file. How can I download this file and pass it to `datasets.load_dataset()` manually?
Versions:
Python version 3.7.3
PyTorch version 1.6.0
TensorFlow version 2.3.0
datasets version: 1.0.1
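For reference, the workaround suggested in the comments is to build the dataset on a machine with internet access, serialize it, and copy it to the offline server; a minimal sketch (the paths are placeholders, and it assumes a `datasets` version with `save_to_disk`/`load_from_disk`):
```python
# On a machine with internet access
from datasets import load_dataset
ds = load_dataset('text', data_files='corpus.txt', split='train')
ds.save_to_disk('/shared/my_text_dataset')  # copy this folder to the offline server

# On the offline compute node
from datasets import load_from_disk
ds = load_from_disk('/shared/my_text_dataset')
```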
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/647/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/647/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2 days, 6:53:27
|
https://api.github.com/repos/huggingface/datasets/issues/643
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/643/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/643/comments
|
https://api.github.com/repos/huggingface/datasets/issues/643/events
|
https://github.com/huggingface/datasets/issues/643
| 704,477,164
|
MDU6SXNzdWU3MDQ0NzcxNjQ=
| 643
|
Caching processed dataset at wrong folder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mrm8488",
"id": 3653789,
"login": "mrm8488",
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mrm8488",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting !\r\nIt uses a temporary file to write the data.\r\nHowever it looks like the temporary file is not placed in the right directory during the processing",
"Well actually I just tested and the temporary file is placed in the same directory, so it should work as expected.\r\nWhich version of `datasets` are you using ?",
"`datasets-1.0.1`\r\nHere you can reproduce it here:\r\nhttps://colab.research.google.com/drive/1O0KcepTFsmpkBbrbLLMq42iwTKmQh8d5?usp=sharing\r\n",
"It looks like a pyarrow issue with google colab.\r\nFor some reason this code increases the disk usage of google colab while it actually writes into google drive:\r\n\r\n```python\r\nimport pyarrow as pa\r\n\r\nstream = pa.OSFile(\"/content/drive/My Drive/path/to/file.arrow\", \"wb\")\r\nwriter = pa.RecordBatchStreamWriter(stream, schema=pa.schema({\"text\": pa.string()}))\r\nwriter.write_table(pa.Table.from_pydict({\"text\": [\"a\"*511 + \"\\n\"] * ((1 << 30) // 512)})) # 1GiB\r\nwriter.close()\r\nstream.close()\r\n```\r\n\r\nMoreover if I `rm` the file on google drive, it frees disk space on google colab.",
"It looks like replacing `pa.OSFile` by `open` fixes it, I'm going to open a PR",
"Ok. Thank you so much!",
"Actually I did more tests it doesn't >.<\r\nI'll let you know if I find a way to fix that",
"Actually I also have the issue when writing a regular text file\r\n\r\n```python\r\nf = open(\"/content/drive/My Drive/path/to/file\", \"w\")\r\nf.write((\"a\"*511 + \"\\n\") * ((1 << 30) // 512)) # 1GiB\r\nf.close()\r\n```\r\n\r\nIs that supposed to happen ?",
"The code you wrote should write a 1GB file in the Google Drive folder. Doesn't it? ",
"Yes it does, but the disk usage of google colab also increases by 1GB",
"I could check it and as you say as I write to te Drive disk the colab disk also increases...",
"To reproduce it: \r\n```bash\r\n!df -h | grep sda1\r\n```\r\n```python\r\nf = open(\"/content/drive/My Drive/test_to_remove.txt\", \"w\")\r\nf.write((\"a\"*511 + \"\\n\") * ((1 << 30) // 512)) # 1GiB\r\nf.write((\"a\"*511 + \"\\n\") * ((1 << 30) // 512)) # 1GiB\r\nf.close()\r\n```\r\n```bash\r\n!ls -lh /content/drive/My\\ Drive/test_to_remove.txt\r\n\r\n!df -h | grep sda1\r\n\r\n!rm -rf /content/drive/My\\ Drive/test_to_remove.txt\r\n\r\n```\r\n[Colab](https://colab.research.google.com/drive/1D0UiweCYQwwWZ65EEhuqqbaDDbhJYXfm?usp=sharing)\r\n\r\n\r\n",
"Apparently, Colab uses a local cache of the data files read/written from Google Drive. See:\r\n- https://github.com/googlecolab/colabtools/issues/2087#issuecomment-860818457\r\n- https://github.com/googlecolab/colabtools/issues/1915#issuecomment-804234540\r\n- https://github.com/googlecolab/colabtools/issues/2147#issuecomment-885052636"
] | 2020-09-18T15:41:26
| 2022-02-16T14:53:29
| 2022-02-16T14:53:29
|
CONTRIBUTOR
| null | null | null | null |
Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = dataset.map(encode, batched=True)
```
The file is about 4 GB, so I cannot process it on the Colab HD because there is not enough space. So I decided to mount my Google Drive filesystem and do it there.
The dataset is cached in the right place, but processing it (applying the `encode` function) seems to use a different folder, because the Colab HD usage starts to grow and the session crashes, when everything should happen on the Drive filesystem.
What drives me crazy is that it prints that it is processing/encoding the dataset in the right folder:
```
Testing the mapped function outputs
Testing finished, running the mapping function on the dataset
Caching processed dataset at /content/drive/My Drive/text/default-ad3e69d6242ee916/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/cache-b16341780a59747d.arrow
```
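One thing I could also try (a sketch, not a confirmed fix) is to pin the processed cache file explicitly to the Drive folder via `cache_file_name`, so the processed arrow file is definitely written there rather than to a default location:
```python
dataset = dataset.map(
    encode,
    batched=True,
    cache_file_name='/content/drive/My Drive/cache-encoded.arrow',  # hypothetical path
)
```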
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/643/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/643/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 515 days, 23:12:03
|
https://api.github.com/repos/huggingface/datasets/issues/638
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/638/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/638/comments
|
https://api.github.com/repos/huggingface/datasets/issues/638/events
|
https://github.com/huggingface/datasets/issues/638
| 704,146,956
|
MDU6SXNzdWU3MDQxNDY5NTY=
| 638
|
GLUE/QQP dataset: NonMatchingChecksumError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi ! Sure I'll take a look"
] | 2020-09-18T07:09:10
| 2020-09-18T11:37:07
| 2020-09-18T11:37:07
|
CONTRIBUTOR
| null | null | null | null |
Hi @lhoestq , I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and get back to my development cycle asap. 😚
datasets version: editable install of master at 9/17
`datasets.load_dataset('glue','qqp', cache_dir='./datasets')`
```
Downloading and preparing dataset glue/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to ./datasets/glue/qqp/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
in
----> 1 datasets.load_dataset('glue','qqp', cache_dir='./datasets')
~/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
~/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
467 if not downloaded_from_gcs:
468 self._download_and_prepare(
--> 469 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
470 )
471 # Sync info
~/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
527 if verify_infos:
528 verify_checksums(
--> 529 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
530 )
531
~/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip']
```
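A temporary workaround (only a sketch; the real fix is updating the checksums recorded in the GLUE script) is to skip the verification step:
```python
import datasets

# ignore_verifications skips the checksum comparison that raises NonMatchingChecksumError
qqp = datasets.load_dataset('glue', 'qqp', cache_dir='./datasets', ignore_verifications=True)
```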
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/638/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/638/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4:27:57
|
https://api.github.com/repos/huggingface/datasets/issues/633
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/633/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/633/comments
|
https://api.github.com/repos/huggingface/datasets/issues/633/events
|
https://github.com/huggingface/datasets/issues/633
| 702,440,484
|
MDU6SXNzdWU3MDI0NDA0ODQ=
| 633
|
Load large text file for LM pre-training resulting in OOM
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29704017?v=4",
"events_url": "https://api.github.com/users/leethu2012/events{/privacy}",
"followers_url": "https://api.github.com/users/leethu2012/followers",
"following_url": "https://api.github.com/users/leethu2012/following{/other_user}",
"gists_url": "https://api.github.com/users/leethu2012/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leethu2012",
"id": 29704017,
"login": "leethu2012",
"node_id": "MDQ6VXNlcjI5NzA0MDE3",
"organizations_url": "https://api.github.com/users/leethu2012/orgs",
"received_events_url": "https://api.github.com/users/leethu2012/received_events",
"repos_url": "https://api.github.com/users/leethu2012/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leethu2012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leethu2012/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leethu2012",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ?",
"There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem.",
"@lhoestq @sgugger Thanks for your comments. I have install from source code as you told, but the problem is still there.\r\nTo reproduce the issue, just replace [these lines](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L241-L258) with: \r\n(load_dataset and DataCollatorForDatasetsLanguageModeling as [above mentioned](https://github.com/huggingface/datasets/issues/633#issue-702440484))\r\n```python\r\n dataset = load_dataset(\"bookcorpus\")\r\n dataset = dataset.train_test_split(test_size=0.1)\r\n train_dataset = dataset['train']\r\n eval_dataset = dataset['test'] if training_args.do_eval else None\r\n\r\n data_collator = DataCollatorForDatasetsLanguageModeling(\r\n tokenizer=tokenizer,\r\n mlm=data_args.mlm,\r\n mlm_probability=data_args.mlm_probability,\r\n block_size=data_args.block_size\r\n )\r\n```\r\nand run by:\r\n```bash\r\npython run_language_modeling.py\r\n--output_dir=output \\\r\n--model_type=bert \\\r\n--model_name_or_path=bert-base-uncased \\\r\n--do_train \\\r\n--do_eval \\\r\n--mlm \r\n```",
"Same here. Pre-training on wikitext-103 to do some test. At the end of the training it takes 32GB of RAM + ~30GB of SWAP. I installed dataset==1.1.0, not built from source. I will try uninstalling and building from source when it finish.",
"This seems to be on the `transformers` library side.\r\n\r\nIf you have more informations (pip env) or even better, a colab reproducing the error we can investigate.",
"It seems like it's solved with freshed versions of transformers. I have tried to replicate the error doing a fresh pip install transformers & datasets on colab and the error doesn't continue. On colab it keeps stable on 5GB! (Y)\r\n\r\nEdit: **Thanks for your great work**. Have a good day.",
"@gaceladri witch version transformers and datasets are you using now? I want to try again. Thanks.",
"transformers==3.3.1\r\ndatasets==1.1.0\r\ntokenizers==0.8.1rc2\r\n",
"doing some modifications to mobilebert\r\nhttps://colab.research.google.com/drive/1ba09ZOpyHGAOQLcsxiQAHRXl10qnMU5o?usp=sharing ",
"It does not happen to me anymore. Can we close? @leethu2012 ",
"It's happening to me again. After 4 hours of pre-training, my ram memory gets full and the kernel dies. I am using the last transformers version as today. 4.4.0 and the last version of datasets 1.2.1, both installed from master. The memory consumption keeps increasing.",
"It looks like it is something from pytorch/python itself :face_with_head_bandage: https://github.com/pytorch/pytorch/issues/13246 ",
"Thanks for the investigation @gaceladri \r\n\r\nApparently this happens when `num_workers>0` and has to do with objects being copied-on-write.\r\nDid you try setting num_workers to 0 @gaceladri ?\r\nIf the issue doesn't happen with `num_workers=0` then this would confirm that it's indeed related to this python/pytorch issue.\r\n\r\nSince a `Dataset` object is a wrapper of a pyarrow Table, we should investigate if the data being copied comes from the Table itself or from metadata in the `Dataset` object. If it comes from the metadata in the `Dataset` object, we should be able to implement a workaround. But if it comes from the Table, we'll need to see with the pyarrow team what we can do... ",
"@lhoestq I have tried and it keeps increasing also with `dataloader_num_workers=0`",
"Hmmm so this might come from another issue...\r\nSince it doesn't seem to be related to multiprocessing it should be easier to investigate though.\r\nDo you have some ideas @gaceladri ?",
"@lhoestq I looked quickly to a previously spoted bug in my env wandb /sdk/interface/interface.py, because sometimes when I load the dataset I got a multiprocessing error at line 510 in wandb...interface.py\r\n\r\nThis bug is reported here https://github.com/huggingface/datasets/issues/847\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<timed eval> in <module>\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path, trial)\r\n 877 print(len(epoch_iterator))\r\n 878 \r\n--> 879 for step, inputs in enumerate(epoch_iterator):\r\n 880 \r\n 881 start_step = time.time()\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)\r\n 433 if self._sampler_iter is None:\r\n 434 self._reset()\r\n--> 435 data = self._next_data()\r\n 436 self._num_yielded += 1\r\n 437 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _next_data(self)\r\n 1083 else:\r\n 1084 del self._task_info[idx]\r\n-> 1085 return self._process_data(data)\r\n 1086 \r\n 1087 def _try_put_index(self):\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)\r\n 1109 self._try_put_index()\r\n 1110 if isinstance(data, ExceptionWrapper):\r\n-> 1111 data.reraise()\r\n 1112 return data\r\n 1113 \r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/_utils.py in reraise(self)\r\n 426 # have message field\r\n 427 raise self.exc_type(message=msg)\r\n--> 428 raise self.exc_type(msg)\r\n 429 \r\n 430 \r\n\r\nAssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1083, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1070, in _getitem\r\n format_kwargs=format_kwargs,\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 886, in _convert_outputs\r\n v = map_nested(command, v, **map_nested_kwargs)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/utils/py_utils.py\", line 216, in map_nested\r\n return function(data_struct)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 847, in command\r\n return torch.tensor(x, **format_kwargs)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py\", line 101, in _showwarnmsg\r\n _showwarnmsg_impl(msg)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py\", line 30, in _showwarnmsg_impl\r\n file.write(text)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n 
cb(name, data)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/wandb_run.py\", line 729, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 186, in publish_output\r\n self._publish_output(o)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 191, in _publish_output\r\n self._publish(rec)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 510, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n```\r\n\r\nMy workaround was to just comment those lines without looking to much into consecuences:\r\n\r\n```\r\ndef _publish(self, record: pb.Record, local: bool = None) -> None:\r\n #if self._process and not self._process.is_alive():\r\n # raise Exception(\"The wandb backend process has shutdown\")\r\n```\r\n\r\nIt worked so far... I need to try running without wandb and see if it could be causing something wrong with multiprocessing. I am going to try to launch the training setting wandb to false and I will let you know again.",
"@lhoestq But despite this, I got lost into the [class Dataset()](https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset) reading the pyarrow files.\r\n\r\nEdit: but you should be rigth, that it does not have to be related to multiprocessing since it keeps happening when `num_workers=0` ",
"Or maybe wandb uses multiprocessing ? One process for wandb logging and one for actual training ? If this is the case then even setting `num_workers=0` would cause the process to be forked for wandb and therefore cause the memory issue.",
"@lhoestq could be, but if we set wandb to false this should not happen. I am going to try.",
"@lhoestq It keeps happening. I have uninstalled wandb from my env, setted `%env WANDB_DISABLED=true` on my notebook, and commented this func:\r\n\r\n```\r\ndef get_available_reporting_integrations():\r\n integrations = []\r\n if is_azureml_available():\r\n integrations.append(\"azure_ml\")\r\n if is_comet_available():\r\n integrations.append(\"comet_ml\")\r\n if is_mlflow_available():\r\n integrations.append(\"mlflow\")\r\n if is_tensorboard_available():\r\n integrations.append(\"tensorboard\")\r\n # if is_wandb_available():\r\n # integrations.append(\"wandb\")\r\n return integrations\r\n```\r\nAs a fast test and it keeps increasing the ram memory. Wandb could not be the blameworthy here.",
"Thanks for checking @gaceladri . Let's investigate the single process setting then.\r\nIf you have some sort of colab notebook with a minimal code example that shows this behavior feel free to share it @gaceladri so that we can play around with it to find what causes this. Otherwise I'll probably try to reproduce on my side at one point",
"@lhoestq sure. Here you have https://colab.research.google.com/drive/1ba09ZOpyHGAOQLcsxiQAHRXl10qnMU5o?usp=sharing let me know if the link works and it reproduces the issue. To me, it reproduces the issue, since if you start the training the ram memory keeps increasing.\r\n\r\nLet me know. Thanks!",
"Could the bug be comming from tokenizers?\r\n\r\nI got this warning at the terminal from my jupyter notebook: \r\n```\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\n```",
"I've never experienced memory issues with tokenizers so I don't know\r\nCc @n1t0 are you aware of any issue that would cause memory to keep increasing when the tokenizer is used in the Data Collator for language modeling ?",
"@lhoestq Thanks for pointing to n1t0, just to clarify. That warning was doing fine-tuning, without collator:\r\n```\r\n\r\nfrom datasets import load_dataset, load_metric\r\nimport numpy as np\r\n\r\nGLUE_TASKS = [\r\n \"cola\",\r\n \"mnli\",\r\n \"mnli-mm\",\r\n \"mrpc\",\r\n \"qnli\",\r\n \"qqp\",\r\n \"rte\",\r\n \"sst2\",\r\n \"stsb\",\r\n \"wnli\",\r\n]\r\ntask = \"mnli\"\r\nactual_task = \"mnli\" if task == \"mnli-mm\" else task\r\ndataset = load_dataset(\"glue\", actual_task)\r\nmetric = load_metric(\"glue\", actual_task)\r\nbatch_size = 16\r\nattention_type = \"linear\"\r\n\r\nfrom transformers.models.mobilebert_mod import (\r\n MobileBertForSequenceClassification,\r\n MobileBertTokenizerFast,\r\n)\r\nfrom transformers.models.mobilebert_mod.configuration_mobilebert import (\r\n MobileBertConfigMod,\r\n)\r\nfrom transformers import TrainingArguments, Trainer\r\n\r\nnum_labels = 3 if task.startswith(\"mnli\") else 1 if task == \"stsb\" else 2\r\ntokenizer = MobileBertTokenizerFast.from_pretrained(\r\n \"/media/ad/00b5422b-9d54-4449-8b5d-08eab5cdac8c/training_trfm/big_linear_layerdrop_shared/checkpoint-23000/\",\r\n max_len=512,\r\n)\r\nmodel = MobileBertForSequenceClassification.from_pretrained(\r\n \"/media/ad/00b5422b-9d54-4449-8b5d-08eab5cdac8c/training_trfm/big_linear_layerdrop_shared/checkpoint-23000/\",\r\n num_labels=num_labels,\r\n)\r\nprint(model.num_parameters())\r\n\r\ntask_to_keys = {\r\n \"cola\": (\"sentence\", None),\r\n \"mnli\": (\"premise\", \"hypothesis\"),\r\n \"mnli-mm\": (\"premise\", \"hypothesis\"),\r\n \"mrpc\": (\"sentence1\", \"sentence2\"),\r\n \"qnli\": (\"question\", \"sentence\"),\r\n \"qqp\": (\"question1\", \"question2\"),\r\n \"rte\": (\"sentence1\", \"sentence2\"),\r\n \"sst2\": (\"sentence\", None),\r\n \"stsb\": (\"sentence1\", \"sentence2\"),\r\n \"wnli\": (\"sentence1\", \"sentence2\"),\r\n}\r\n\r\nsentence1_key, sentence2_key = task_to_keys[task]\r\nif sentence2_key is None:\r\n print(f\"Sentence: {dataset['train'][0][sentence1_key]}\")\r\nelse:\r\n print(f\"Sentence 1: {dataset['train'][0][sentence1_key]}\")\r\n print(f\"Sentence 2: {dataset['train'][0][sentence2_key]}\")\r\n\r\n\r\ndef preprocess_function(examples):\r\n if sentence2_key is None:\r\n return tokenizer(examples[sentence1_key], truncation=True)\r\n return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)\r\n\r\n\r\nencoded_dataset = dataset.map(preprocess_function, batched=True)\r\nmetric_name = (\r\n \"pearson\"\r\n if task == \"stsb\"\r\n else \"matthews_correlation\"\r\n if task == \"cola\"\r\n else \"accuracy\"\r\n)\r\n\r\nargs = TrainingArguments(\r\n f\"test-glue/{task}_{attention_type}\",\r\n evaluation_strategy=\"steps\",\r\n learning_rate=1e-5,\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n logging_steps=200,\r\n num_train_epochs=5,\r\n gradient_accumulation_steps=1,\r\n warmup_steps=10000,\r\n fp16=True,\r\n dataloader_num_workers=10,\r\n weight_decay=0.1,\r\n load_best_model_at_end=True,\r\n metric_for_best_model=metric_name,\r\n)\r\n\r\n\r\ndef compute_metrics(eval_pred):\r\n predictions, labels = eval_pred\r\n if task != \"stsb\":\r\n predictions = np.argmax(predictions, axis=1)\r\n else:\r\n predictions = predictions[:, 0]\r\n return metric.compute(predictions=predictions, references=labels)\r\n\r\n\r\nvalidation_key = (\r\n \"validation_mismatched\"\r\n if task == \"mnli-mm\"\r\n else \"validation_matched\"\r\n if task == \"mnli\"\r\n else \"validation\"\r\n)\r\n\r\ntrainer = Trainer(\r\n 
model,\r\n args,\r\n train_dataset=encoded_dataset[\"train\"],\r\n eval_dataset=encoded_dataset[validation_key],\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_metrics,\r\n)\r\n\r\ntrainer.train()\r\n```\r\n\r\nNow, I have come back to pre-training. The changes that I think I have done are: not formatting the dataset to torch: ~~`big_dataset.set_format(type='torch', columns=[\"text\", \"input_ids\", \"attention_mask\", \"token_type_ids\"])`~~ so maybe some column is dropped and not freezed in memory and now I have not setted any validation dataset in the trainer. \r\n\r\nMy validation dataset before:\r\n```\r\nbook_corpus_eval = load_dataset(\r\n \"bookcorpus\",\r\n \"plain_text\",\r\n cache_dir=\"/home/ad/Desktop/bookcorpus\",\r\n split=\"train[98:99%]\",\r\n)\r\nbook_corpus_eval = book_corpus_eval.map(encode, batched=True)\r\nbook_corpus_eval.set_format(\r\n type=\"torch\", columns=[\"text\", \"input_ids\", \"attention_mask\", \"token_type_ids\"]\r\n)\r\n**book_corpus_eval = book_corpus_eval.select([i for i in range(1500)])**\r\n```\r\nMaybe _selecting_ or indexing the dataset before feeding it to the trainer, do something strange.\r\n\r\nMy trainer now:\r\n```\r\n\r\nbig_dataset = load_from_disk(\"/home/ad/Desktop/35percent_data.arrow/\")\r\n\r\nfrom transformers import DataCollatorForWholeWordMask\r\n\r\ndata_collator = DataCollatorForWholeWordMask(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15)\r\n\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./big_linear_layerdrop_shared_silu_secondtry\",\r\n overwrite_output_dir=True,\r\n per_device_train_batch_size=60,\r\n per_device_eval_batch_size=60,\r\n save_steps=500,\r\n save_total_limit=10,\r\n logging_first_step=True,\r\n logging_steps=100,\r\n# evaluation_strategy='steps',\r\n# eval_steps=250,\r\n gradient_accumulation_steps=8,\r\n fp16=True,\r\n dataloader_num_workers=10,\r\n warmup_steps=15000,\r\n learning_rate=6e-4,\r\n adam_epsilon=1e-6,\r\n adam_beta2=0.98,\r\n weight_decay=0.01,\r\n max_grad_norm=1.0,\r\n max_steps=500000, \r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=big_dataset,\r\n# eval_dataset=book_corpus_eval,\r\n tokenizer=tokenizer)\r\n\r\nimport wandb\r\nwandb.login()\r\n\r\ntrainer.train()\r\n```\r\n\r\nAnd surprisingly, the ram now keeps going up and down. The training is up now for 12h without collapse the ram. I don't know what could cause the leakage. :mag: \r\n\r\nEdit: I didn't see the swap memory, that keeps increasing. So the problem persist. ",
"Thanks for sharing your results.\r\nSo you still had the issue for fine-tuning ?\r\nAnd the issue still appears with a bare-bone dataset from an arrow file...",
"Yes, on both cases. Fine-tuning a pre-trained model and pre-training from scratch with a local arrow file already pre-processed."
] | 2020-09-16T04:33:15
| 2021-02-16T12:02:01
| null |
NONE
| null | null | null | null |
I tried to pretrain Longformer using transformers and datasets, but I got OOM issues when loading a large text file. My script is roughly like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator used for language modeling based on DataCollatorForLazyLanguageModeling
- collates batches of tensors, honoring their tokenizer's pad_token
- preprocesses batches for masked language modeling
"""
block_size: int = 512
def __call__(self, examples: List[dict]) -> Dict[str, torch.Tensor]:
examples = [example['text'] for example in examples]
batch, attention_mask = self._tensorize_batch(examples)
if self.mlm:
inputs, labels = self.mask_tokens(batch)
return {"input_ids": inputs, "labels": labels}
else:
labels = batch.clone().detach()
if self.tokenizer.pad_token_id is not None:
labels[labels == self.tokenizer.pad_token_id] = -100
return {"input_ids": batch, "labels": labels}
def _tensorize_batch(self, examples: List[str]) -> Tuple[torch.Tensor, torch.Tensor]:
if self.tokenizer._pad_token is None:
raise ValueError(
"You are attempting to pad samples but the tokenizer you are using"
f" ({self.tokenizer.__class__.__name__}) does not have one."
)
tensor_examples = self.tokenizer.batch_encode_plus(
[ex for ex in examples if ex],
max_length=self.block_size,
return_tensors="pt",
pad_to_max_length=True,
return_attention_mask=True,
truncation=True,
)
input_ids, attention_mask = tensor_examples["input_ids"], tensor_examples["attention_mask"]
return input_ids, attention_mask
dataset = load_dataset('text', data_files='train.txt', cache_dir="./", split='train')
data_collator = DataCollatorForDatasetsLanguageModeling(tokenizer=tokenizer, mlm=True,
mlm_probability=0.15, block_size=tokenizer.max_len)
trainer = Trainer(model=model, args=args, data_collator=data_collator,
                  train_dataset=dataset, prediction_loss_only=True)
trainer.train(model_path=model_path)
```
This train.txt is about 1.1 GB and has 90k lines, where each line is a sequence of about 4k words.
During training, memory usage increased rapidly, as shown in the graph below, and resulted in OOM before training finished.

Could you please give me any suggestions on why this happened and how to fix it?
Thanks.
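For context, a common pattern that avoids keeping tokenized tensors in RAM is to pre-tokenize once with `Dataset.map` (so the data stays memory-mapped in the Arrow cache) instead of tokenizing inside the collator; just a sketch of what I mean:
```python
# Pre-tokenize once; the result is cached on disk and memory-mapped at training time
def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=512, padding='max_length')

dataset = dataset.map(tokenize, batched=True, remove_columns=['text'])
dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])
```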
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/633/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/633/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/630
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/630/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/630/comments
|
https://api.github.com/repos/huggingface/datasets/issues/630/events
|
https://github.com/huggingface/datasets/issues/630
| 701,636,350
|
MDU6SXNzdWU3MDE2MzYzNTA=
| 630
|
Text dataset not working with large files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ksjae",
"id": 17930170,
"login": "ksjae",
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"repos_url": "https://api.github.com/users/ksjae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ksjae",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Seems like it works when setting ```block_size=2100000000``` or something arbitrarily large though.",
"Can you give us some stats on the data files you use as inputs?",
"Basically ~600MB txt files(UTF-8) * 59. \r\ncontents like ```안녕하세요, 이것은 예제로 한번 말해보는 텍스트입니다. 그냥 이렇다고요.<|endoftext|>\\n```\r\n\r\nAlso, it gets stuck for a loooong time at ```Testing the mapped function outputs```, for more than 12 hours(currently ongoing)",
"It gets stuck while doing `.map()` ? Are you using multiprocessing ?\r\nIf you could provide a code snippet it could be very useful",
"From transformers/examples/language-modeling/run-language-modeling.py :\r\n```\r\ndef get_dataset(\r\n args: DataTrainingArguments,\r\n tokenizer: PreTrainedTokenizer,\r\n evaluate: bool = False,\r\n cache_dir: Optional[str] = None,\r\n):\r\n file_path = args.eval_data_file if evaluate else args.train_data_file\r\n if True:\r\n dataset = load_dataset(\"text\", data_files=glob.glob(file_path), split='train', use_threads=True, \r\n ignore_verifications=True, save_infos=True, block_size=104857600)\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n return dataset\r\n if args.line_by_line:\r\n return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\n else:\r\n return TextDataset(\r\n tokenizer=tokenizer,\r\n file_path=file_path,\r\n block_size=args.block_size,\r\n overwrite_cache=args.overwrite_cache,\r\n cache_dir=cache_dir,\r\n )\r\n```\r\n\r\nNo, I'm not using multiprocessing.",
"I am not able to reproduce on my side :/\r\n\r\nCould you send the version of `datasets` and `pyarrow` you're using ?\r\nCould you try to update the lib and try again ?\r\nOr do you think you could try to reproduce it on google colab ?",
"Huh, weird. It's fixed on my side too.\r\nBut now ```Caching processed dataset``` is taking forever - how can I disable it? Any flags?",
"Right after `Caching processed dataset`, your function is applied to the dataset and there's a progress bar that shows how much time is left. How much time does it take for you ?\r\n\r\nAlso caching isn't supposed to slow down your processing. But if you still want to disable it you can do `.map(..., load_from_cache_file=False)`",
"Ah, it’s much faster now(Takes around 15~20min). \r\nBTW, any way to set default tensor output as plain tensors with distributed training? The ragged tensors are incompatible with tpustrategy :(",
"> Ah, it’s much faster now(Takes around 15~20min).\r\n\r\nGlad to see that it's faster now. What did you change exactly ?\r\n\r\n> BTW, any way to set default tensor output as plain tensors with distributed training? The ragged tensors are incompatible with tpustrategy :(\r\n\r\nOh I didn't know about that. Feel free to open an issue to mention that.\r\nI guess what you can do for now is set the dataset format to numpy instead of tensorflow, and use a wrapper of the dataset that converts the numpy arrays to tf tensors.\r\n\r\n",
">>> Glad to see that it's faster now. What did you change exactly ?\r\nI don't know, it just worked...? Sorry I couldn't be more helpful.\r\n\r\nSetting with numpy array is a great idea! Thanks."
] | 2020-09-15T06:02:36
| 2020-09-25T22:21:43
| 2020-09-25T22:21:43
|
NONE
| null | null | null | null |
```
Traceback (most recent call last):
File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
main()
File "examples/language-modeling/run_language_modeling.py", line 262, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "examples/language-modeling/run_language_modeling.py", line 144, in get_dataset
dataset = load_dataset("text", data_files=file_path, split='train+test')
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 469, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 546, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 888, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/ksjae/.local/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/ksjae/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 104, in _generate_tables
convert_options=self.config.convert_options,
File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
```
**pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**
It gives the same message for both 200MB, 10GB .tx files but not for 700MB file.
Can't upload due to size & copyright problem. sorry.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ksjae",
"id": 17930170,
"login": "ksjae",
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"repos_url": "https://api.github.com/users/ksjae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ksjae",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/630/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/630/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10 days, 16:19:07
|
https://api.github.com/repos/huggingface/datasets/issues/629
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/629/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/629/comments
|
https://api.github.com/repos/huggingface/datasets/issues/629/events
|
https://github.com/huggingface/datasets/issues/629
| 701,517,550
|
MDU6SXNzdWU3MDE1MTc1NTA=
| 629
|
straddling object straddles two block boundaries
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17970177?v=4",
"events_url": "https://api.github.com/users/bharaniabhishek123/events{/privacy}",
"followers_url": "https://api.github.com/users/bharaniabhishek123/followers",
"following_url": "https://api.github.com/users/bharaniabhishek123/following{/other_user}",
"gists_url": "https://api.github.com/users/bharaniabhishek123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bharaniabhishek123",
"id": 17970177,
"login": "bharaniabhishek123",
"node_id": "MDQ6VXNlcjE3OTcwMTc3",
"organizations_url": "https://api.github.com/users/bharaniabhishek123/orgs",
"received_events_url": "https://api.github.com/users/bharaniabhishek123/received_events",
"repos_url": "https://api.github.com/users/bharaniabhishek123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bharaniabhishek123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharaniabhishek123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bharaniabhishek123",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"sorry it's an apache arrow issue."
] | 2020-09-15T00:30:46
| 2020-09-15T00:36:17
| 2020-09-15T00:32:17
|
NONE
| null | null | null | null |
I am trying to read json data (it's an array with lots of dictionaries) and getting block boundaries issue as below :
I tried calling read_json with readOptions but no luck .
```
table = json.read_json(fn)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pyarrow/_json.pyx", line 246, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17970177?v=4",
"events_url": "https://api.github.com/users/bharaniabhishek123/events{/privacy}",
"followers_url": "https://api.github.com/users/bharaniabhishek123/followers",
"following_url": "https://api.github.com/users/bharaniabhishek123/following{/other_user}",
"gists_url": "https://api.github.com/users/bharaniabhishek123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bharaniabhishek123",
"id": 17970177,
"login": "bharaniabhishek123",
"node_id": "MDQ6VXNlcjE3OTcwMTc3",
"organizations_url": "https://api.github.com/users/bharaniabhishek123/orgs",
"received_events_url": "https://api.github.com/users/bharaniabhishek123/received_events",
"repos_url": "https://api.github.com/users/bharaniabhishek123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bharaniabhishek123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharaniabhishek123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bharaniabhishek123",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/629/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/629/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:01:31
|
https://api.github.com/repos/huggingface/datasets/issues/625
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/625/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/625/comments
|
https://api.github.com/repos/huggingface/datasets/issues/625/events
|
https://github.com/huggingface/datasets/issues/625
| 701,057,799
|
MDU6SXNzdWU3MDEwNTc3OTk=
| 625
|
dtype of tensors should be preserved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd then for your information, when reading from arrow format we have to cast from arrow to numpy (which is fast since pyarrow has a numpy integration), and then to torch.\r\n\r\nHowever there's one thing that can help you: we make sure that the dtypes correspond to what is defined in `features`.\r\nTherefore what you can do is provide `features` in `.map(preprocess, feature=...)` to specify the output types.\r\n\r\nFor example in your case:\r\n```python\r\nfrom datasets import Features, Value, Sequence\r\n\r\nfeatures = Features({\r\n \"input_ids\": Sequence(Value(\"int32\")),\r\n \"sembedding\": Sequence(Value(\"float32\"))\r\n})\r\npreprocessed_dataset = dataset.map(preprocess, features=features)\r\n\r\npreprocessed_dataset.set_format(\"torch\", columns=[\"input_ids\", \"sembedding\"])\r\nprint(preprocessed_dataset[0][\"sembedding\"].dtype)\r\n# \"torch.float32\"\r\n```\r\n\r\nLet me know if it helps",
"If the arrow format is basically lists, why is the intermediate step to numpy necessary? I am a bit confused about that part.\r\n\r\nThanks for your suggestion. as I have currently implemented this, I cast to torch.Tensor in my collate_fn to save disk space (so I do not have to save padded tensors to max_len but can pad up to max batch len in collate_fn) at the cost of a bit slower processing. So for me this is not relevant anymore, but I am sure it is for others!",
"I'm glad you managed to figure something out :)\r\n\r\nCasting from arrow to numpy can be 100x faster than casting from arrow to list.\r\nThis is because arrow has an integration with numpy that allows it to instantiate numpy arrays with zero-copy from arrow.\r\nOn the other hand to create python lists it is slow since it has to recreate the list object by iterating through each element in python.",
"Ah that is interesting. I have no direct experience with arrow so I didn't know. ",
"I encountered a simliar issue: `datasets` converted my float numpy array to `torch.float64` tensors, while many pytorch operations require `torch.float32` inputs and it's very troublesome. \r\n\r\nI tried @lhoestq 's solution, but since it's mixed with the preprocess function, it's not very intuitive. \r\n\r\nI just want to share another possible simpler solution: directly cast the dtype of the processed dataset.\r\n\r\nNow I want to change the type of `labels` in `train_dataset` from float64 to float32, I can do this.\r\n\r\n```\r\nfrom datasets import Value, Sequence, Features\r\nfeats = train_dataset.features.copy()\r\nfeats['labels'].feature = Value(dtype='float32')\r\nfeats = Features(feats)\r\ntrain_dataset.cast_(feats)\r\n```\r\n",
"Reopening since @bhavitvyamalik started looking into it !\r\n\r\nAlso I'm posting here a function that could be helpful to support preserving the dtype of tensors.\r\n\r\nIt's used to build a pyarrow array out of a numpy array and:\r\n- it doesn't convert the numpy array to a python list\r\n- it keeps the precision of the numpy array for the pyarrow array\r\n- it works with multidimensional arrays (while `pa.array` can only take a 1D array as input)\r\n- it builds the pyarrow ListArray from offsets created on-the-fly and values that come from the flattened numpy array\r\n\r\n```python\r\nfrom functools import reduce\r\nfrom operator import mul\r\n\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\ndef pa_ndarray(a):\r\n \"\"\"Build a PyArrow ListArray from a multidimensional NumPy array\"\"\"\r\n values = pa.array(a.flatten()) \r\n for i in range(a.ndim - 1): \r\n n_offsets = reduce(mul, a.shape[:a.ndim - i - 1], 1) \r\n step_offsets = a.shape[a.ndim - i - 1] \r\n offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32()) \r\n values = pa.ListArray.from_arrays(offsets, values) \r\n return values \r\n\r\nnarr = np.arange(42).reshape(7, 2, 3).astype(np.uint8)\r\nparr = pa_ndarray(narr)\r\nassert isinstance(parr, pa.Array)\r\nassert parr.type == pa.list_(pa.list_(pa.uint8()))\r\nassert narr.tolist() == parr.to_pylist()\r\n```\r\n\r\nThe only costly operation is the offsets computations. Since it doesn't iterate on the numpy array values this function is pretty fast.",
"@lhoestq Have you thought about this further?\r\n\r\nWe have a use case where we're attempting to load data containing numpy arrays using the `datasets` library.\r\n\r\nWhen using one of the \"standard\" methods (`[Value(...)]` or `Sequence()`) we see ~200 samples processed per second during the call to `_prepare_split`. This slowdown is caused by the vast number of calls to `encode_nested_example` (each sequence is converted to a list, and each element in the sequence...). \r\n\r\nUsing the `Feature` `ArrayND` improves this somewhat to ~500/s as it now uses numpy's `tolist()` rather than iterating over each value in the array and converting them individually.\r\n\r\nHowever, it's still pretty slow and in theory it should be possible to avoid the `numpy -> python -> arrow` dance altogether. To demonstrate this, if you keep the `Feature` set to an `ArrayND` but instead return a `pa_ndarray(...)` in `_generate_examples` it skips the conversion (`return obj, False`) and hits ~11_000/s. Two orders of magnitude speed up! The problem is this then fails later on when the `ArrowWriter` tries to write the examples to disk :-( \r\n\r\nIt would be nice to have first-class support for user-defined PyArrow objects. Is this a possibility? We have _large_ datasets where even an order of magnitude difference is important so settling on the middle ~500/s is less than ideal! \r\n\r\nIs there a workaround for this or another method that should be used instead that gets near-to or equal performance to returning PyArrow arrays?",
"Note that manually generating the table using `pyarrow` achieves ~30_000/s",
"Hi !\r\n\r\nIt would be awesome to achieve this speed for numpy arrays !\r\nFor now we have to use `encode_nested_example` to convert numpy arrays to python lists since pyarrow doesn't support multidimensional numpy arrays (only 1D).\r\n\r\nMaybe let's start a new PR from your PR @bhavitvyamalik (idk why we didn't answer your PR at that time, sorry about that).\r\nBasically the idea is to allow `TypedSequence` to support numpy arrays as you did, and remove the numpy->python casting in `_cast_to_python_objects`.\r\n\r\nThis is really important since we are starting to have a focus on other modalities than text as well (audio, images).\r\n\r\nThough until then @samgd, there is another feature that may interest you and that may give you the speed you want:\r\n\r\nIn a dataset script you can subclass either a GeneratorBasedBuilder (with the `_generate_examples ` method) or an ArrowBasedBuilder if you want. the ArrowBasedBuilder allows to yield arrow data by implementing the `_generate_tables` method (it's the same as `_generate_examples` except you must yield arrow tables). Since the data are already in arrow format, it doesn't call `encode_nested_example`. Let me know if that helps."
] | 2020-09-14T12:38:05
| 2021-08-17T08:30:04
| 2021-08-17T08:30:04
|
CONTRIBUTOR
| null | null | null | null |
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-have-the-same-dtype-float32/96221)).
As a user I did not expect this bug. I have a `map` function that I call on the Dataset that looks like this:
```python
def preprocess(sentences: List[str]):
token_ids = [[vocab.to_index(t) for t in s.split()] for s in sentences]
sembeddings = stransformer.encode(sentences)
print(sembeddings.dtype)
return {"input_ids": token_ids, "sembedding": sembeddings}
```
Given a list of `sentences` (`List[str]`), it converts those into token_ids on the one hand (list of lists of ints; `List[List[int]]`) and into sentence embeddings on the other (Tensor of dtype `torch.float32`). That means that I actually set the column "sembedding" to a tensor that I as a user expect to be a float32.
It appears though that behind the scenes, this tensor is converted into a **list**. I did not find this documented anywhere but I might have missed it. From a user's perspective this is incredibly important though, because it means you cannot do any data_type or tensor casting yourself in a mapping function! Furthermore, this can lead to issues, as was my case.
My model expected float32 precision, which I thought `sembedding` was because that is what `stransformer.encode` outputs. But behind the scenes this tensor is first cast to a list, and when we then set its format, as below, this column is cast not to float32 but to double precision float64.
```python
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])
```
This happens because apparently there is an intermediate step of casting to a **numpy** array (?) **whose dtype creation/deduction is different from torch dtypes** (see the snippet below). As you can see, this means that the dtype is not preserved: if I got it right, the dataset goes from torch.float32 -> list -> float64 (numpy) -> torch.float64.
```python
import torch
import numpy as np
l = [-0.03010837361216545, -0.035979013890028, -0.016949838027358055]
torch_tensor = torch.tensor(l)
np_array = np.array(l)
np_to_torch = torch.from_numpy(np_array)
print(torch_tensor.dtype)
# torch.float32
print(np_array.dtype)
# float64
print(np_to_torch.dtype)
# torch.float64
```
This might lead to unwanted behaviour. I understand that the whole library is probably built around casting from numpy to other frameworks, so this might be difficult to solve. Perhaps `set_format` should include a `dtypes` option where for each input column the user can specify the wanted precision.
The alternative is that the user needs to cast manually after loading data from the dataset but that does not seem user-friendly, makes the dataset less portable, and might use more space in memory as well as on disk than is actually needed.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/625/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/625/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 336 days, 19:51:59
|
https://api.github.com/repos/huggingface/datasets/issues/624
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/624/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/624/comments
|
https://api.github.com/repos/huggingface/datasets/issues/624/events
|
https://github.com/huggingface/datasets/issues/624
| 700,541,628
|
MDU6SXNzdWU3MDA1NDE2Mjg=
| 624
|
Add learningq dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17561003?v=4",
"events_url": "https://api.github.com/users/krrishdholakia/events{/privacy}",
"followers_url": "https://api.github.com/users/krrishdholakia/followers",
"following_url": "https://api.github.com/users/krrishdholakia/following{/other_user}",
"gists_url": "https://api.github.com/users/krrishdholakia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/krrishdholakia",
"id": 17561003,
"login": "krrishdholakia",
"node_id": "MDQ6VXNlcjE3NTYxMDAz",
"organizations_url": "https://api.github.com/users/krrishdholakia/orgs",
"received_events_url": "https://api.github.com/users/krrishdholakia/received_events",
"repos_url": "https://api.github.com/users/krrishdholakia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/krrishdholakia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krrishdholakia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/krrishdholakia",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
| null |
[] |
[] | 2020-09-13T10:20:27
| 2020-09-14T09:50:02
| null |
NONE
| null | null | null | null |
Hi,
Thank you again for this amazing repo.
Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/624/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/624/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/623
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/623/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/623/comments
|
https://api.github.com/repos/huggingface/datasets/issues/623/events
|
https://github.com/huggingface/datasets/issues/623
| 700,235,308
|
MDU6SXNzdWU3MDAyMzUzMDg=
| 623
|
Custom feature types in `load_dataset` from CSV
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lvwerra",
"id": 8264887,
"login": "lvwerra",
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lvwerra",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
[
"Currently `csv` doesn't support the `features` attribute (unlike `json`).\r\nWhat you can do for now is cast the features using the in-place transform `cast_`\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'])\r\ndataset.cast_(emotion_features)\r\n```\r\n",
"Thanks for the clarification!",
"Hi @lhoestq we've tried out your suggestion but are now running into the following error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-163-81ffd5ac18c9> in <module>\r\n----> 1 dataset.cast_(emotion_features)\r\n\r\n/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py in cast_(self, features)\r\n 125 self._check_values_type()\r\n 126 for dataset in self.values():\r\n--> 127 dataset.cast_(features=features)\r\n 128 \r\n 129 def remove_columns_(self, column_names: Union[str, List[str]]):\r\n\r\n/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)\r\n 161 # Call actual function\r\n 162 \r\n--> 163 out = func(self, *args, **kwargs)\r\n 164 \r\n 165 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in cast_(self, features)\r\n 602 self._info.features = features\r\n 603 schema = pa.schema(features.type)\r\n--> 604 self._data = self._data.cast(schema)\r\n 605 \r\n 606 @fingerprint(inplace=True)\r\n\r\n/usr/local/lib/python3.6/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.cast()\r\n\r\nValueError: Target schema's field names are not matching the table's field names: ['text', 'label'], ['label', 'text']\r\n```\r\n\r\nLooking at the types in `emotion_features` we see that `label` and `text` appear to be swapped in the Arrow table:\r\n\r\n```\r\nemotion_features.type\r\nStructType(struct<label: int64, text: string>)\r\n```\r\n\r\nDid we define the `emotion_features` incorrectly? We just followed the instructions from the [docs](https://huggingface.co/docs/datasets/features.html?highlight=features#dataset-features), but perhaps we misunderstood something 😬 \r\n\r\n",
"In general, I don't think there is any hard reason we don't allow to use `features` in the csv script, right @lhoestq?\r\n\r\nShould I add it?",
"> In general, I don't think there is any hard reason we don't allow to use `features` in the csv script, right @lhoestq?\r\n> \r\n> Should I add it?\r\n\r\nSure let's add it. Setting the convert options should do the job\r\n\r\n> Hi @lhoestq we've tried out your suggestion but are now running into the following error:\r\n> \r\n> ```\r\n> ---------------------------------------------------------------------------\r\n> ValueError Traceback (most recent call last)\r\n> <ipython-input-163-81ffd5ac18c9> in <module>\r\n> ----> 1 dataset.cast_(emotion_features)\r\n>\r\n> /usr/local/lib/python3.6/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.cast()\r\n> \r\n> ValueError: Target schema's field names are not matching the table's field names: ['text', 'label'], ['label', 'text']\r\n> ```\r\n>\r\n> Did we define the `emotion_features` incorrectly? We just followed the instructions from the [docs](https://huggingface.co/docs/datasets/features.html?highlight=features#dataset-features), but perhaps we misunderstood something 😬\r\n\r\nThanks for reporting, that's a bug :) I'm fixing it right now",
"PR is open for the `ValueError: Target schema's field names are not matching the table's field names` error.\r\n\r\nI'm adding the features parameter to csv",
"Thanks a lot for the PR and quick fix @lhoestq!"
] | 2020-09-12T13:21:34
| 2020-09-30T19:51:43
| 2020-09-30T08:39:54
|
MEMBER
| null | null | null | null |
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the following code:
```Python
from pathlib import Path
import wget
EMOTION_PATH = Path("./data/emotion")
DOWNLOAD_URLS = [
"https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1",
"https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1",
"https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1",
]
if not Path.is_dir(EMOTION_PATH):
Path.mkdir(EMOTION_PATH)
for url in DOWNLOAD_URLS:
wget.download(url, str(EMOTION_PATH))
```
The first five lines of the train set are:
```
i didnt feel humiliated;sadness
i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake;sadness
im grabbing a minute to post i feel greedy wrong;anger
i am ever feeling nostalgic about the fireplace i will know that it is still on the property;love
i am feeling grouchy;anger
```
Here the code to reproduce the issue:
```Python
from datasets import Features, Value, ClassLabel, load_dataset
class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
emotion_features = Features({'text': Value('string'), 'label': ClassLabel(names=class_names)})
file_dict = {'train': EMOTION_PATH/'train.txt'}
dataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'], features=emotion_features)
```
**Observed behaviour:**
```Python
dataset['train'].features
```
```Python
{'text': Value(dtype='string', id=None),
'label': Value(dtype='string', id=None)}
```
**Expected behaviour:**
```Python
dataset['train'].features
```
```Python
{'text': Value(dtype='string', id=None),
'label': ClassLabel(num_classes=6, names=['sadness', 'joy', 'love', 'anger', 'fear', 'surprise'], names_file=None, id=None)}
```
**Things I've tried:**
- deleting the cache
- trying other types such as `int64`
Am I missing anything? Thanks for any pointer in the right direction.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/623/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/623/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17 days, 19:18:20
|
https://api.github.com/repos/huggingface/datasets/issues/622
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/622/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/622/comments
|
https://api.github.com/repos/huggingface/datasets/issues/622/events
|
https://github.com/huggingface/datasets/issues/622
| 700,225,826
|
MDU6SXNzdWU3MDAyMjU4MjY=
| 622
|
load_dataset for text files not working
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Can you give us more information on your os and pip environments (pip list)?",
"@thomwolf Sure. I'll try downgrading to 3.7 now even though Arrow say they support >=3.5.\r\n\r\nLinux (Ubuntu 18.04) - Python 3.8\r\n======================\r\nPackage - Version\r\n---------------------\r\ncertifi 2020.6.20\r\nchardet 3.0.4\r\nclick 7.1.2\r\ndatasets 1.0.1\r\ndill 0.3.2\r\nfasttext 0.9.2\r\nfilelock 3.0.12\r\nfuture 0.18.2\r\nidna 2.10\r\njoblib 0.16.0\r\nnltk 3.5\r\nnumpy 1.19.1\r\npackaging 20.4\r\npandas 1.1.2\r\npip 20.0.2\r\nprotobuf 3.13.0\r\npyarrow 1.0.1\r\npybind11 2.5.0\r\npyparsing 2.4.7\r\npython-dateutil 2.8.1\r\npytz 2020.1\r\nregex 2020.7.14\r\nrequests 2.24.0\r\nsacremoses 0.0.43\r\nscikit-learn 0.23.2\r\nscipy 1.5.2\r\nsentence-transformers 0.3.6\r\nsentencepiece 0.1.91\r\nsetuptools 46.1.3\r\nsix 1.15.0\r\nstanza 1.1.1\r\nthreadpoolctl 2.1.0\r\ntokenizers 0.8.1rc2\r\ntorch 1.6.0+cu101\r\ntqdm 4.48.2\r\ntransformers 3.1.0\r\nurllib3 1.25.10\r\nwheel 0.34.2\r\nxxhash 2.0.0\r\n\r\nWindows 10 - Python 3.8\r\n================\r\nPackage - Version\r\n----------------------------\r\ncertifi 2020.6.20\r\nchardet 3.0.4\r\nclick 7.1.2\r\ndatasets 1.0.1\r\ndill 0.3.2\r\nfasttext 0.9.2\r\nfilelock 3.0.12\r\nfuture 0.18.2\r\nidna 2.10\r\njoblib 0.16.0\r\nnlp 0.4.0\r\nnltk 3.5\r\nnumpy 1.19.1\r\npackaging 20.4\r\npandas 1.1.1\r\npip 20.0.2\r\nprotobuf 3.13.0\r\npyarrow 1.0.1\r\npybind11 2.5.0\r\npyparsing 2.4.7\r\npython-dateutil 2.8.1\r\npytz 2020.1\r\nregex 2020.7.14\r\nrequests 2.24.0\r\nsacremoses 0.0.43\r\nscikit-learn 0.23.2\r\nscipy 1.5.2\r\nsentence-transformers 0.3.5.1\r\nsentencepiece 0.1.91\r\nsetuptools 46.1.3\r\nsix 1.15.0\r\nstanza 1.1.1\r\nthreadpoolctl 2.1.0\r\ntokenizers 0.8.1rc1\r\ntorch 1.6.0+cu101\r\ntqdm 4.48.2\r\ntransformers 3.0.2\r\nurllib3 1.25.10\r\nwheel 0.34.2\r\nxxhash 2.0.0",
"Downgrading to 3.7 does not help. Here is a dummy text file:\r\n\r\n```text\r\nVerzekering weigert vaker te betalen\r\nBedrijven van verzekeringen erkennen steeds minder arbeidsongevallen .\r\nIn 2012 weigerden de bedrijven te betalen voor 21.055 ongevallen op het werk .\r\nDat is 11,8 % van alle ongevallen op het werk .\r\nNog nooit weigerden verzekeraars zoveel zaken .\r\nIn 2012 hadden 135.118 mensen een ongeval op het werk .\r\nDat zijn elke werkdag 530 mensen .\r\nBij die ongevallen stierven 67 mensen .\r\nBijna 12.000 hebben een handicap na het ongeval .\r\nGeen echt arbeidsongeval Bedrijven moeten een verzekering hebben voor hun werknemers .\r\n```\r\n\r\nA temporary work around for the \"text\" type, is\r\n\r\n```python\r\ndataset = Dataset.from_dict({\"text\": Path(dataset_f).read_text().splitlines()})\r\n```",
"\r\n\r\neven i am facing the same issue.",
"@banunitte Please do not post screenshots in the future but copy-paste your code and the errors. That allows others to copy-and-paste your code and test it. You may also want to provide the Python version that you are using.",
"I have the exact same problem in Windows 10, Python 3.8.\r\n",
"I have the same problem on Linux of the script crashing with a CSV error. This may be caused by 'CRLF', when changed 'CRLF' to 'LF', the problem solved.",
"I pushed a fix for `pyarrow.lib.ArrowInvalid: CSV parse error`. Let me know if you still have this issue.\r\n\r\nNot sure about the windows one yet",
"To complete what @lhoestq is saying, I think that to use the new version of the `text` processing script (which is on master right now) you need to either specify the version of the script to be the `master` one or to install the lib from source (in which case it uses the `master` version of the script by default):\r\n```python\r\ndataset = load_dataset('text', script_version='master', data_files=XXX)\r\n```\r\nWe do versioning by default, i.e. your version of the dataset lib will use the script with the same version by default (i.e. only the `1.0.1` version of the script if you have the PyPI version `1.0.1` of the lib).",
"\r\nwin10, py3.6\r\n\r\n\r\n```\r\nfrom datasets import Features, Value, ClassLabel, load_dataset\r\n\r\n\r\nfeatures = Features({'text': Value('string'), 'ctext': Value('string')})\r\nfile_dict = {'train': PATH/'summary.csv'}\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, script_version='master', delimiter='\\t', column_names=['text', 'ctext'], features=features)\r\n```",
"```python\r\nTraceback` (most recent call last):\r\n File \"main.py\", line 281, in <module>\r\n main()\r\n File \"main.py\", line 190, in main\r\n train_data, test_data = data_factory(\r\n File \"main.py\", line 129, in data_factory\r\n train_data = load_dataset('text', \r\n File \"/home/me/Downloads/datasets/src/datasets/load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/me/Downloads/datasets/src/datasets/builder.py\", line 468, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/me/Downloads/datasets/src/datasets/builder.py\", line 546, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/me/Downloads/datasets/src/datasets/builder.py\", line 888, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"/home/me/.local/lib/python3.8/site-packages/tqdm/std.py\", line 1130, in __iter__\r\n for obj in iterable:\r\n File \"/home/me/.cache/huggingface/modules/datasets_modules/datasets/text/512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7fcc649178b014/text.py\", line 103, in _generate_tables\r\n pa_table = pac.read_csv(\r\n File \"pyarrow/_csv.pyx\", line 617, in pyarrow._csv.read_csv\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2\r\n```\r\n\r\nUnfortunately i am still getting this issue on Linux. I installed datasets from source and specified script_version to master.\r\n\r\n",
"> \r\n> win10, py3.6\r\n> \r\n> ```\r\n> from datasets import Features, Value, ClassLabel, load_dataset\r\n> \r\n> \r\n> features = Features({'text': Value('string'), 'ctext': Value('string')})\r\n> file_dict = {'train': PATH/'summary.csv'}\r\n> \r\n> dataset = load_dataset('csv', data_files=file_dict, script_version='master', delimiter='\\t', column_names=['text', 'ctext'], features=features)\r\n> ```\r\n\r\nSince #644 it should now work on windows @ScottishFold007 \r\n\r\n> Trying the following snippet, I get different problems on Linux and Windows.\r\n> \r\n> ```python\r\n> dataset = load_dataset(\"text\", data_files=\"data.txt\")\r\n> # or \r\n> dataset = load_dataset(\"text\", data_files=[\"data.txt\"])\r\n> ```\r\n>\r\n> Windows just seems to get stuck. Even with a tiny dataset of 10 lines, it has been stuck for 15 minutes already at this message:\r\n> \r\n> ```\r\n> Checking C:\\Users\\bramv\\.cache\\huggingface\\datasets\\b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.\r\n> Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\r\n> Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\r\n> Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\\text.py\r\n> Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text\\dataset_infos.json\r\n> Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\\text.json\r\n> Using custom data configuration default\r\n> ```\r\n\r\nSame for you @BramVanroy .\r\n\r\nNot sure about the one on linux though",
"> To complete what @lhoestq is saying, I think that to use the new version of the `text` processing script (which is on master right now) you need to either specify the version of the script to be the `master` one or to install the lib from source (in which case it uses the `master` version of the script by default):\r\n> \r\n> ```python\r\n> dataset = load_dataset('text', script_version='master', data_files=XXX)\r\n> ```\r\n> \r\n> We do versioning by default, i.e. your version of the dataset lib will use the script with the same version by default (i.e. only the `1.0.1` version of the script if you have the PyPI version `1.0.1` of the lib).\r\n\r\nLinux here:\r\n\r\nI was using the 0.4.0 nlp library load_dataset to load a text dataset of 9-10Gb without collapsing the RAM memory. However, today I got the csv error message mentioned in this issue. After installing the new (datasets) library from source and specifying the script_verson = 'master' I'm still having this same error message. Furthermore, I cannot use the dictionary \"trick\" to load the dataset since the system kills the process due to a RAM out of memory problem. Is there any other solution to this error? Thank you in advance. ",
"Hi @raruidol \r\nTo fix the RAM issue you'll need to shard your text files into smaller files (see https://github.com/huggingface/datasets/issues/610#issuecomment-691672919 for example)\r\n\r\nI'm not sure why you're having the csv error on linux.\r\nDo you think you could to to reproduce it on google colab for example ?\r\nOr send me a dummy .txt file that reproduces the issue ?",
"@lhoestq \r\n\r\nThe crash message shows up when loading the dataset:\r\n```\r\nprint('Loading corpus...') \r\nfiles = glob.glob('corpora/shards/*') \r\n-> dataset = load_dataset('text', script_version='master', data_files=files) \r\nprint('Corpus loaded.')\r\n```\r\nAnd this is the exact message:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 27, in <module>\r\n dataset = load_dataset('text', script_version='master', data_files=files)\r\n File \"/home/jupyter-raruidol/DebatAnalyser/env/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/home/jupyter-raruidol/DebatAnalyser/env/lib/python3.7/site-packages/datasets/builder.py\", line 471, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/jupyter-raruidol/DebatAnalyser/env/lib/python3.7/site-packages/datasets/builder.py\", line 548, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/jupyter-raruidol/DebatAnalyser/env/lib/python3.7/site-packages/datasets/builder.py\", line 892, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"/home/jupyter-raruidol/DebatAnalyser/env/lib/python3.7/site-packages/tqdm/std.py\", line 1130, in __iter__\r\n for obj in iterable:\r\n File \"/home/jupyter-raruidol/.cache/huggingface/modules/datasets_modules/datasets/text/512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7fcc649178b014/text.py\", line 107, in _generate_tables\r\n convert_options=self.config.convert_options,\r\n File \"pyarrow/_csv.pyx\", line 714, in pyarrow._csv.read_csv\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2\r\n```\r\n\r\nAnd these are the pip packages I have atm and their versions:\r\n\r\n```\r\nPackage Version Location \r\n--------------- --------- -------------------------------------------------------------\r\ncertifi 2020.6.20 \r\nchardet 3.0.4 \r\nclick 7.1.2 \r\ndatasets 1.0.2 \r\ndill 0.3.2 \r\nfilelock 3.0.12 \r\nfuture 0.18.2 \r\nidna 2.10 \r\njoblib 0.16.0 \r\nnumpy 1.19.1 \r\npackaging 20.4 \r\npandas 1.1.1 \r\npip 19.0.3 \r\npyarrow 1.0.1 \r\npyparsing 2.4.7 \r\npython-dateutil 2.8.1 \r\npytz 2020.1 \r\nregex 2020.7.14 \r\nrequests 2.24.0 \r\nsacremoses 0.0.43 \r\nsentencepiece 0.1.91 \r\nsetuptools 40.8.0 \r\nsix 1.15.0 \r\ntokenizers 0.8.1rc2 \r\ntorch 1.6.0 \r\ntqdm 4.48.2 \r\ntransformers 3.0.2 /home/jupyter-raruidol/DebatAnalyser/env/src/transformers/src\r\n```\r\n\r\n\r\n",
"I tested on google colab which is also linux using this code:\r\n\r\n- first download an arbitrary text file\r\n```bash\r\nwget https://raw.githubusercontent.com/abisee/cnn-dailymail/master/url_lists/all_train.txt\r\n```\r\n- then run\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"text\", data_files=\"all_train.txt\", script_version='master')\r\n```\r\nAnd I don't get this issue.\r\n\r\n\\> Could you test on your side if these lines work @raruidol ?\r\n\r\nalso cc @Skyy93 as it seems you have the same issue\r\n\r\nIf it works:\r\nIt could mean that the issue could come from unexpected patterns in the files you want to use.\r\nIn that case we should find a way to handle them.\r\n\r\nAnd if it doesn't work:\r\nIt could mean that it comes from the way pyarrow reads text files on linux.\r\nIn that case we should report it to pyarrow and find a workaround in the meantime\r\n\r\nEither way it should help to find where this bug comes from and fix it :)\r\n\r\nThank you in advance !",
"Update: also tested the above code in a docker container from [jupyter/minimal-notebook](https://hub.docker.com/r/jupyter/minimal-notebook/) (based on ubuntu) and still not able to reproduce",
"It looks like with your text input file works without any problem. I have been doing some experiments this morning with my input files and I'm almost certain that the crash is caused by some unexpected pattern in the files. However, I've not been able to spot the main cause of it. What I find strange is that this same corpus was being loaded by the nlp 0.4.0 library without any problem... Where can I find the code where you structure the input text data in order to use it with pyarrow?",
"Under the hood it does\r\n```python\r\nimport pyarrow as pa\r\nimport pyarrow.csv\r\n\r\n# Use csv reader from Pyarrow with one column for text files\r\n\r\n# To force the one-column setting, we set an arbitrary character\r\n# that is not in text files as delimiter, such as \\b or \\v.\r\n# The bell character, \\b, was used to make beeps back in the days\r\nparse_options = pa.csv.ParseOptions( \r\n delimiter=\"\\b\", \r\n quote_char=False, \r\n double_quote=False, \r\n escape_char=False, \r\n newlines_in_values=False, \r\n ignore_empty_lines=False, \r\n)\r\n\r\nread_options= pa.csv.ReadOptions(use_threads=True, column_names=[\"text\"])\r\n\r\npa_table = pa.csv.read_csv(\"all_train.txt\", read_options=read_options, parse_options=parse_options)\r\n```\r\n\r\nNote that we changed the parse options with datasets 1.0\r\nIn particular the delimiter used to be `\\r` but this delimiter doesn't work on windows.",
"Could you try with `\\a` instead of `\\b` ? It looks like the bell character is \\a in python and not \\b",
"I was just exploring if the crash was happening in every shard or not, and which shards were generating the error message. With \\b I got the following list of shards crashing:\r\n\r\n```\r\nErrors on files: ['corpora/shards/shard_0069', 'corpora/shards/shard_0043', 'corpora/shards/shard_0014', 'corpora/shards/shard_0032', 'corpora/shards/shard_0088', 'corpora/shards/shard_0018', 'corpora/shards/shard_0073', 'corpora/shards/shard_0079', 'corpora/shards/shard_0038', 'corpora/shards/shard_0041', 'corpora/shards/shard_0007', 'corpora/shards/shard_0004', 'corpora/shards/shard_0102', 'corpora/shards/shard_0096', 'corpora/shards/shard_0030', 'corpora/shards/shard_0076', 'corpora/shards/shard_0067', 'corpora/shards/shard_0052', 'corpora/shards/shard_0026', 'corpora/shards/shard_0024', 'corpora/shards/shard_0064', 'corpora/shards/shard_0044', 'corpora/shards/shard_0013', 'corpora/shards/shard_0062', 'corpora/shards/shard_0057', 'corpora/shards/shard_0097', 'corpora/shards/shard_0094', 'corpora/shards/shard_0078', 'corpora/shards/shard_0075', 'corpora/shards/shard_0039', 'corpora/shards/shard_0077', 'corpora/shards/shard_0021', 'corpora/shards/shard_0040', 'corpora/shards/shard_0009', 'corpora/shards/shard_0023', 'corpora/shards/shard_0095', 'corpora/shards/shard_0107', 'corpora/shards/shard_0063', 'corpora/shards/shard_0086', 'corpora/shards/shard_0047', 'corpora/shards/shard_0089', 'corpora/shards/shard_0037', 'corpora/shards/shard_0101', 'corpora/shards/shard_0093', 'corpora/shards/shard_0082', 'corpora/shards/shard_0091', 'corpora/shards/shard_0065', 'corpora/shards/shard_0020', 'corpora/shards/shard_0070', 'corpora/shards/shard_0008', 'corpora/shards/shard_0058', 'corpora/shards/shard_0060', 'corpora/shards/shard_0022', 'corpora/shards/shard_0059', 'corpora/shards/shard_0100', 'corpora/shards/shard_0027', 'corpora/shards/shard_0072', 'corpora/shards/shard_0098', 'corpora/shards/shard_0019', 'corpora/shards/shard_0066', 'corpora/shards/shard_0042', 'corpora/shards/shard_0053']\r\n```\r\n\r\nI also tried with \\a and the list decreased but there were still several crashes:\r\n\r\n```\r\nErrors on files: ['corpora/shards/shard_0069', 'corpora/shards/shard_0055', 'corpora/shards/shard_0043', 'corpora/shards/shard_0014', 'corpora/shards/shard_0073', 'corpora/shards/shard_0025', 'corpora/shards/shard_0068', 'corpora/shards/shard_0102', 'corpora/shards/shard_0096', 'corpora/shards/shard_0076', 'corpora/shards/shard_0067', 'corpora/shards/shard_0026', 'corpora/shards/shard_0024', 'corpora/shards/shard_0044', 'corpora/shards/shard_0087', 'corpora/shards/shard_0092', 'corpora/shards/shard_0074', 'corpora/shards/shard_0094', 'corpora/shards/shard_0078', 'corpora/shards/shard_0039', 'corpora/shards/shard_0077', 'corpora/shards/shard_0040', 'corpora/shards/shard_0009', 'corpora/shards/shard_0107', 'corpora/shards/shard_0063', 'corpora/shards/shard_0103', 'corpora/shards/shard_0047', 'corpora/shards/shard_0033', 'corpora/shards/shard_0089', 'corpora/shards/shard_0037', 'corpora/shards/shard_0082', 'corpora/shards/shard_0071', 'corpora/shards/shard_0091', 'corpora/shards/shard_0065', 'corpora/shards/shard_0070', 'corpora/shards/shard_0058', 'corpora/shards/shard_0081', 'corpora/shards/shard_0060', 'corpora/shards/shard_0002', 'corpora/shards/shard_0059', 'corpora/shards/shard_0027', 'corpora/shards/shard_0072', 'corpora/shards/shard_0098', 'corpora/shards/shard_0019', 'corpora/shards/shard_0045', 'corpora/shards/shard_0036', 'corpora/shards/shard_0066', 'corpora/shards/shard_0053']\r\n```\r\n\r\nWhich means 
that it is quite possible that the assumption of that some unexpected pattern in the files is causing the crashes is true. If I am able to reach any conclusion I will post It here asap.",
"Hmmm I was expecting it to work with \\a, not sure why they appear in your text files though",
"Hi @lhoestq, is there any input length restriction which was not before the update of the nlp library?",
"No we never set any input length restriction on our side (maybe arrow but I don't think so)",
"@lhoestq Can you ever be certain that a delimiter character is not present in a plain text file? In other formats (e.g. CSV) , rules are set of what is allowed and what isn't so that it actually constitutes a CSV file. In a text file you basically have \"anything goes\", so I don't think you can ever be entirely sure that the chosen delimiter does not exist in the text file, or am I wrong? \r\n\r\nIf I understand correctly you choose a delimiter that we hope does not exist in the file, so that when the CSV parser starts splitting into columns, it will only ever create one column? Why can't we use a newline character though?",
"Okay, I have splitted the crashing shards into individual sentences and some examples of the inputs that are causing the crashes are the following ones:\r\n\r\n\r\n_4. DE L’ORGANITZACIÓ ESTAMENTAL A L’ORGANITZACIÓ EN CLASSES A mesura que es desenvolupava un sistema econòmic capitalista i naixia una classe burgesa cada vegada més preparada per a substituir els dirigents de les velles monarquies absolutistes, es qüestionava l’abundància de béns amortitzats, que com s’ha dit estaven fora del mercat i no pagaven tributs, pels perjudicis que ocasionaven a les finances públiques i a l’economia en general. Aquest estat d’opinió revolucionari va desembocar en un conjunt de mesures pràctiques de caràcter liberal. D’una banda, les que intentaven desposseir les mans mortes del domini de béns acumulats, procés que acostumem a denominar desamortització, i que no és més que la nacionalització i venda d’aquests béns eclesiàstics o civils en subhasta pública al millor postor. D’altra banda, les que redimien o reduïen els censos i delmes o aixecaven les prohibicions de venda, és a dir, les vinculacions. La desamortització, que va afectar béns dels ordes religiosos, dels pobles i d’algunes corporacions civils, no va ser un camí fàcil, perquè costava i costa trobar algú que sigui indiferent a la pèrdua de béns, drets i privilegis. I té una gran transcendència, va privar els antics estaments de les Espanyes, clero i pobles —la noblesa en queda al marge—, de la força econòmica que els donaven bona part de les seves terres i, en última instància, va preparar el terreny per a la substitució de la vella societat estamental per la nova societat classista. En aquesta societat, en teoria, les agrupacions socials són obertes, no tenen cap estatut jurídic privilegiat i estan definides per la possessió o no d’uns béns econòmics que són lliurement alienables. A les Espanyes la transformació va afectar poc l’aristocràcia latifundista, allà on n’hi havia. Aquesta situació va afavorir, en part, la persistència de la vella cultura de la societat estamental en determinats ambients, i això ha influït decisivament en la manca de democràcia que caracteritza la majoria de règims polítics que s’han anat succeint. Una manera de pensar que sempre sura en un moment o altre, i que de fet no acaba de desaparèixer del tot. 5. INICI DE LA DESAMORTITZACIÓ A LES ESPANYES Durant el segle xviii, dins d’aquesta visió lliberal, va agafar força en alguns cercles de les Espanyes el corrent d’opinió contrari a les mans mortes. Durant el regnat de Carles III, s’arbitraren les primeres mesures desamortitzadores proposades per alguns ministres il·lustrats. Aquestes disposicions foren modestes i poc eficaces, no van aturar l’acumulació de terres per part dels estaments que constituïen les mans mortes i varen afectar principalment béns dels pobles. L’Església no va ser tocada, excepte en el cas de 110_\r\n\r\n_la revolució liberal, perquè, encara que havia perdut els seus drets jurisdiccionals, havia conservat la majoria de terres i fins i tot les havia incrementat amb d’altres que procedien de la desamortització. En la nova situació, les mans mortes del bosc públic eren l’Estat, que no cerca mai l’autofinançament de les despeses de gestió; els diners que manquin ja els posarà l’Estat. 9. 
DEFENSA I INTENTS DE RECUPERACIÓ DELS BÉNS COMUNALS DESAMORTITZATS El procés de centralització no era senzill, perquè, d’una banda, la nova organització apartava de la gestió moltes corporacions locals i molts veïns que l’havien portada des de l’edat mitjana, i, de l’altra, era difícil de coordinar la nova silvicultura amb moltes pràctiques forestals i drets tradicionals, com la pastura, fer llenya o tallar un arbre aquí i un altre allà quan tenia el gruix suficient, les pràctiques que s’havien fet sempre. Les primeres passes de la nova organització centralitzada varen tenir moltes dificultats en aquells indrets en què els terrenys municipals i comunals tenien un paper important en l’economia local. La desobediència a determinades normes imposades varen prendre formes diferents. Algunes institucions, com, per exemple, la Diputació de Lleida, varen retardar la tramitació d’alguns expedients i varen evitar la venda de béns municipals. Molts pobles permeteren deixar que els veïns continuessin amb les seves pràctiques tradicionals, d’altres varen boicotejar les subhastes d’aprofitaments. L’Estat va reaccionar encomanant a la Guàrdia Civil el compliment de les noves directrius. Imposar el nou règim va costar a l’Administració un grapat d’anys, però de mica en mica, amb molta, molta guarderia i gens de negociació, ho va aconseguir. La nova gestió estatal dels béns municipals va deixar, com hem comentat, molta gent sense uns recursos necessaris per a la supervivència, sobre tot en àrees on predominaven les grans propietats, i on els pagesos sense terra treballaven de jornalers temporers. Això va afavorir que, a bona part de les Espanyes, les primeres lluites camperoles de la segona meitat del segle xix defensessin la recuperació dels comunals desamortitzats; per a molts aquella expropiació i venda dirigida pels governs monàrquics era la causa de molta misèria. D’altres, més radicalitzats, varen entendre que l’eliminació de la propietat col·lectiva i la gestió estatal dels boscos no desamortitzats suposava una usurpació pura i dura. En les zones més afectades per la desamortització això va donar lloc a un imaginari centrat en la defensa del comunal. La Segona República va arribar en una conjuntura econòmica de crisi, generada pel crac del 1929. Al camp, aquesta situació va produir una forta caiguda dels preus dels productes agraris i un increment important de l’atur. QUADERNS AGRARIS 42 (juny 2017), p. 105-126_\r\n\r\nI think that the main difference between the crashing samples and the rest is their length. Therefore, couldn't the length be causing the message errors? I hope with these samples you can identify what is causing the crashes considering that the 0.4.0 nlp library was loading them properly.",
"So we're using the csv reader to read text files because arrow doesn't have a text reader.\r\nTo workaround the fact that text files are just csv with one column, we want to set a delimiter that doesn't appear in text files.\r\nUntil now I thought that it would do the job but unfortunately it looks like even characters like \\a appear in text files.\r\n\r\nSo we have to option:\r\n- find another delimiter that does the job (maybe `\\x1b` esc or `\\x18` cancel)\r\n- don't use the csv reader from arrow but the text reader from pandas instead (or any other reader). The only important thing is that it must be fast (arrow's reader has a nice and fast multithreaded for csv that we're using now but hopefully we can find an alternative)\r\n\r\n\r\n\r\n> @lhoestq Can you ever be certain that a delimiter character is not present in a plain text file? In other formats (e.g. CSV) , rules are set of what is allowed and what isn't so that it actually constitutes a CSV file. In a text file you basically have \"anything goes\", so I don't think you can ever be entirely sure that the chosen delimiter does not exist in the text file, or am I wrong?\r\n\r\nAs long as the text file follows some encoding it wouldn't make sense to have characters such as the bell character. However I agree it can happen.\r\n\r\n> If I understand correctly you choose a delimiter that we hope does not exist in the file, so that when the CSV parser starts splitting into columns, it will only ever create one column? Why can't we use a newline character though?\r\n\r\nExactly. Arrow doesn't allow the newline character unfortunately.",
"> Okay, I have splitted the crashing shards into individual sentences and some examples of the inputs that are causing the crashes are the following ones\r\n\r\nThanks for digging into it !\r\n\r\nCharacters like \\a or \\b are not shown when printing the text, so as it is I can't tell if it contains unexpected characters.\r\nMaybe could could open the file in python and check if `\"\\b\" in open(\"path/to/file\", \"r\").read()` ?\r\n\r\n> I think that the main difference between the crashing samples and the rest is their length. Therefore, couldn't the length be causing the message errors? I hope with these samples you can identify what is causing the crashes considering that the 0.4.0 nlp library was loading them properly.\r\n\r\nTo check that you could try to run \r\n\r\n```python\r\nimport pyarrow as pa\r\nimport pyarrow.csv\r\n\r\nopen(\"dummy.txt\", \"w\").write(((\"a\" * 10_000) + \"\\n\") * 4) # 4 lines of 10 000 'a'\r\n\r\nparse_options = pa.csv.ParseOptions( \r\n delimiter=\"\\b\", \r\n quote_char=False, \r\n double_quote=False, \r\n escape_char=False, \r\n newlines_in_values=False, \r\n ignore_empty_lines=False, \r\n)\r\n\r\nread_options= pa.csv.ReadOptions(use_threads=True, column_names=[\"text\"])\r\n\r\npa_table = pa.csv.read_csv(\"dummy.txt\", read_options=read_options, parse_options=parse_options)\r\n```\r\n\r\non my side it runs without error though",
"That's true, It was my error printing the text that way. Maybe as a workaround, I can force all my input samples to have \"\\b\" at the end?",
"> That's true, It was my error printing the text that way. Maybe as a workaround, I can force all my input samples to have \"\\b\" at the end?\r\n\r\nI don't think it would work since we only want one column, and \"\\b\" is set to be the delimiter between two columns, so it will raise the same issue again. Pyarrow would think that there is more than one column if the delimiter is found somewhere.\r\n\r\nAnyway, I I'll work on a new text reader if we don't find the right workaround about this delimiter issue."
] | 2020-09-12T12:49:28
| 2020-10-28T11:07:31
| 2020-10-28T11:07:30
|
CONTRIBUTOR
| null | null | null | null |
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(P.S. [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that you can pass a single string for `data_files`, even though the signature is `Union[Dict, List]`.)
The problem on Linux is that the script crashes with a CSV error (even though it isn't a CSV file). On Windows the script just seems to freeze or get stuck after loading the config file.
Linux stack trace:
```
PyTorch version 1.6.0+cu101 available.
Checking /home/bram/.cache/huggingface/datasets/b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7
Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py
Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.json
Using custom data configuration default
Generating dataset text (/home/bram/.cache/huggingface/datasets/text/default-0907112cc6cd2a38/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7)
Downloading and preparing dataset text/default-0907112cc6cd2a38 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bram/.cache/huggingface/datasets/text/default-0907112cc6cd2a38/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7...
Dataset not on Hf google storage. Downloading and preparing it from source
Downloading took 0.0 min
Checksum Computation took 0.0 min
Unable to verify checksums.
Generating split train
Traceback (most recent call last):
File "/home/bram/Python/projects/dutch-simplification/utils.py", line 45, in prepare_data
dataset = load_dataset("text", data_files=dataset_f)
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/load.py", line 608, in load_dataset
builder_instance.download_and_prepare(
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 468, in download_and_prepare
self._download_and_prepare(
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 546, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 888, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/tqdm/std.py", line 1130, in __iter__
for obj in iterable:
File "/home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 100, in _generate_tables
pa_table = pac.read_csv(
File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2
```
Windows just seems to get stuck. Even with a tiny dataset of 10 lines, it has been stuck for 15 minutes already at this message:
```
Checking C:\Users\bramv\.cache\huggingface\datasets\b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7
Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\text.py
Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text\dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\text.json
Using custom data configuration default
```
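As a quick sanity check, a minimal sketch (the file name is a placeholder) that looks for the `\b` control character which, per the discussion in this thread, the text loader currently uses as its CSV delimiter; if it occurs in the data, pyarrow sees more than one column and raises the error above:

```python
# Sketch: scan the input file(s) for the "\b" (backspace) control character
# used as the CSV delimiter by the text loader. File names are placeholders.
for path in ["data.txt"]:
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    print(path, "contains \\b:", "\b" in text)
```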
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/622/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/622/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 45 days, 22:18:02
|
https://api.github.com/repos/huggingface/datasets/issues/620
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/620/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/620/comments
|
https://api.github.com/repos/huggingface/datasets/issues/620/events
|
https://github.com/huggingface/datasets/issues/620
| 699,815,135
|
MDU6SXNzdWU2OTk4MTUxMzU=
| 620
|
map/filter multiprocessing raises errors and corrupts datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"It seems that I ran into the same problem\r\n```\r\ndef tokenize(cols, example):\r\n for in_col, out_col in cols.items():\r\n example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))\r\n return example\r\ncola = datasets.load_dataset('glue', 'cola')\r\ntokenized_cola = cola.map(partial(tokenize, {'sentence': 'text_idxs'}),\r\n num_proc=2,)\r\n```\r\nand it outpus (exceprts)\r\n```\r\nConcatenating 2 shards from multiprocessing\r\nSet __getitem__(key) output type to python objects for ['idx', 'label', 'sentence', 'text_idxs'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nTesting the mapped function outputs\r\nTesting finished, running the mapping function on the dataset\r\nDone writing 532 indices in 4256 bytes .\r\nDone writing 531 indices in 4248 bytes .\r\nProcess #0 will write at /home/yisiang/.cache/huggingface/datasets/glue/cola/1.0.0/930e9d141872db65102cabb9fa8ac01c11ffc8a1b72c2e364d8cdda4610df542/tokenized_test_00000_of_00002.arrow\r\nProcess #1 will write at /home/yisiang/.cache/huggingface/datasets/glue/cola/1.0.0/930e9d141872db65102cabb9fa8ac01c11ffc8a1b72c2e364d8cdda4610df542/tokenized_test_00001_of_00002.arrow\r\nSpawning 2 processes\r\n```\r\nand then the program never stop.",
"same problem.\r\n`encoded_dataset = core_data.map(lambda examples: tokenizer(examples[\"query\"], examples[\"document\"], padding=True, truncation='longest_first', return_tensors=\"pt\", max_length=384), num_proc=16, keep_in_memory=True)`\r\nit outputs:\r\n```\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787499 indices in 25568385696 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nSpawning 16 processes\r\n```",
"Thanks for reporting.\r\n\r\n\r\nWhich tokenizers are you using ? What platform are you on ? Can you tell me which version of datasets and pyarrow you're using ? @timothyjlaurent @richarddwang @HuangLianzhe \r\n\r\nAlso if you're able to reproduce the issue on google colab that would be very helpful.\r\n\r\nI tried to run your code @richarddwang with the bert tokenizer and I wasn't able to reproduce",
"Hi, Sorry that I forgot to see what my version was.\r\nBut after updating datasets to master (editable install), and latest pyarrow. \r\nIt works now ~",
"Sorry, I just noticed this.\r\nI'm running this on MACOS the version of datasets I'm was 1.0.0 but I've also tried it on 1.0.2. `pyarrow==1.0.1`, Python 3.6\r\n\r\nConsider this code:\r\n```python\r\n\r\n loader_path = str(Path(__file__).parent / \"prodigy_dataset_builder.py\")\r\n ds = load_dataset(\r\n loader_path, name=\"prodigy-ds\", data_files=list(file_paths), cache_dir=cache_dir\r\n )[\"train\"]\r\n valid_relations = set(vocabulary.relation_types.keys())\r\n\r\n ds = ds.filter(filter_good_rows, fn_kwargs=dict(valid_rel_labels=valid_relations))\r\n\r\n ds = ds.map(map_bpe_encodings, batched=True, fn_kwargs=dict(tokenizer=vocabulary.tokenizer), num_proc=10)\r\n\r\n # add all feature data\r\n ner_ds: Dataset = ds.map(\r\n add_bio_tags,\r\n fn_kwargs=dict(ner_label_map=vocabulary.ner_labels, tokenizer=vocabulary.tokenizer),\r\n )\r\n rel_ds: Dataset = ner_ds.map(\r\n relation_ds_factory,\r\n batched=True,\r\n writer_batch_size=100,\r\n fn_kwargs=dict(tokenizer=vocabulary.tokenizer, vocabulary=vocabulary),\r\n )\r\n```\r\nThe loader is essentially a jsonloader with some extra error handling. The data is a jsonlines format with text field and a list of span objects and relation objects. \r\n\r\nIn the `ner_ds` a field, `ner_labels` is added, this is used in the downstream `relation_ds_factory`. It all runs fine in a single process but I get a KeyError error if run with num_proc set\r\n:\r\n\r\n```\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 348, in relation_ds_factory\r\n ner_labels = example[\"ner_labels\"]\r\nKeyError: 'ner_labels'\r\n``` \r\n\r\nThis is just one example of what goes wrong. I've started just saving the dataset as arrow at the end because it takes a long time to map/filter/shuffle and the caching isn't working (tracked it down to byte differences in the pickled functions). \r\n\r\n^^ Interestingly if I heed the warning from Tokenizers and set the environment variable, `TOKENIZERS_PARALLELISM=true` the map just hangs:\r\n\r\n```\r\n[I 200921 21:43:18 filelock:318] Lock 5694118768 released on /Users/timothy.laurent/.cache/huggingface/datasets/_Users_timothy.laurent_.cache_huggingface_datasets_prodigy_dataset_builder_prodigy-ds-5f34378723c4e83f_0.0.0_e67d9b43d5cd82c50b1eae8f2097daf95b601a04dc03ddd504f2b234a5fa247a.lock\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.34ba/s]\r\n#0: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#2: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#3: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#4: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#5: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#6: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#7: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#8: 0%| | 0/1 [00:00<?, ?ba/s]\r\n```",
"Thank you, I was able to reproduce :)\r\nI'm on it",
"#659 should fix the `KeyError` issue. It was due to the formatting not getting updated the right way",
"Also maybe @n1t0 knows why setting `TOKENIZERS_PARALLELISM=true` creates deadlock issues when calling `map` with multiprocessing ?",
"@lhoestq \r\n\r\nThanks for taking a look. I pulled the master but I still see the key error.\r\n\r\n```\r\nTo disable this warning, you can either:\r\n - Avoid using `tokenizers` before the fork if possible\r\n - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\n#0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 21.56ba/s]\r\n#1: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 17.71ba/s]\r\n#2: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 20.45ba/s]\r\n#3: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 26.05ba/s]\r\n#4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 26.83ba/s]\r\n#5: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 27.00ba/s]\r\n#6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 27.40ba/s]\r\n#7: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 25.91ba/s]\r\n#8: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 22.46ba/s]\r\n#9: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 20.15ba/s]\r\n#10: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 26.81ba/s]\r\n#11: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 27.45ba/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 322/322 [00:00<00:00, 1462.85ex/s]\r\nTraceback (most recent call last): | 0/1 [00:00<?, ?ba/s]\r\n File \"text2struct/run_model.py\", line 372, in <module>\r\n main()\r\n File \"text2struct/run_model.py\", line 368, in main | 0/1 [00:00<?, ?ba/s]\r\n run_model(auto_envvar_prefix=\"GFB_CIES\") # pragma: no cover\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs) | 0/1 [00:00<?, ?ba/s]\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 1236, in invoke\r\n return Command.invoke(self, ctx)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/decorators.py\", line 21, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"text2struct/run_model.py\", 
line 136, in run_model\r\n ctx.invoke(ctx.command.commands[config_dict[\"mode\"]])\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/decorators.py\", line 21, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"text2struct/run_model.py\", line 187, in train\r\n run_train_model(_parse_subcommand(ctx))\r\n File \"text2struct/run_model.py\", line 241, in run_train_model\r\n checkpoint_steps=config.train.checkpoint_steps,\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/train.py\", line 153, in alternate_training\r\n max_len=config.model.dim.max_len,\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 466, in load_prodigy_tf_datasets\r\n folder, file_patterns, vocabulary, cache_dir=cache_dir, test_pct=test_pct\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 447, in load_prodigy_arrow_datasets\r\n fn_kwargs=dict(tokenizer=vocabulary.tokenizer, vocabulary=vocabulary),\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1224, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1195, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 348, in relation_ds_factory\r\n ner_labels = example[\"ner_labels\"]\r\nKeyError: 'ner_labels'\r\n\r\n```",
"The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf https://github.com/huggingface/tokenizers/issues/187).\r\nSo if possible, the tokenizers shouldn't be used before the fork, so that each process can then make use of the parallelism. Otherwise using `TOKENIZERS_PARALLELISM=false` is the way to go.",
"> Thanks for taking a look. I pulled the master but I still see the key error.\r\n\r\nI am no longer able to get the error since #659 was merged. Not sure why you still have it @timothyjlaurent \r\nMaybe it is a cache issue ? Could you try to use `load_from_cache_file=False` in your `.map()` calls ?",
"> The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf [huggingface/tokenizers#187](https://github.com/huggingface/tokenizers/issues/187)).\r\n> So if possible, the tokenizers shouldn't be used before the fork, so that each process can then make use of the parallelism. Otherwise using `TOKENIZERS_PARALLELISM=false` is the way to go.\r\n\r\nOk thanks :)\r\n\r\nIs there something we should do on the `datasets` side to avoid that that the program hangs ?\r\n\r\nAlso when doing `.map` with a tokenizer, the tokenizer is called once on the first examples of the dataset to check the function output before spawning the processes. Is that compatible with how tokenizers are supposed to be used with multiprocessing ?",
"#659 fixes the empty dict issue\r\n#688 fixes the hang issue",
"Hmmm I pulled the latest commit, `b93c5517f70a480533a44e0c42638392fd53d90`, and I'm still seeing both the hanging and the key error. ",
"Hi @timothyjlaurent \r\n\r\nThe hanging fix just got merged, that why you still had it.\r\n\r\nFor the key error it's possible that the code you ran reused cached datasets from where the KeyError bug was still there.\r\nCould you try to clear your cache or make sure that it doesn't reuse cached data with `.map(..., load_from_cache=False)` ?\r\nLet me know if it it helps",
"Hi @lhoestq , \r\n\r\nThanks for letting me know about the update.\r\n\r\nSo I don't think it's the caching - because hashing mechanism isn't stable for me -- but that's a different issue. In any case I `rm -rf ~/.cache/huggingface` to make a clean slate.\r\n\r\nI synced with master and I see the key error has gone away, I tried with and without the `TOKENIZERS_PARALLELISM` variable set and see the log line for setting the value false before the map.\r\n\r\nNow I'm seeing an issue with `.train_test_split()` on datasets that are the product of a multiprocess map.\r\n\r\nHere is the stack trace\r\n\r\n```\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 451, in load_prodigy_arrow_datasets\r\n ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/src/datasets/src/datasets/arrow_dataset.py\", line 168, in wrapper\r\n dataset.set_format(**new_format)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/src/datasets/src/datasets/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/src/datasets/src/datasets/arrow_dataset.py\", line 794, in set_format\r\n list(filter(lambda col: col not in self._data.column_names, columns)), self._data.column_names\r\nValueError: Columns ['train', 'test'] not in the dataset. Current columns in the dataset: ['_input_hash', '_task_hash', '_view_id', 'answer', 'encoding__ids', 'encoding__offsets', 'encoding__overflowing', 'encoding__tokens', 'encoding__words', 'ner_ids', 'ner_labels', 'relations', 'spans', 'text', 'tokens']\r\n```\r\n\r\n\r\n",
"Thanks for reporting.\r\nI'm going to fix that and add a test case so that it doesn't happen again :) \r\nI'll let you know when it's done\r\n\r\nIn the meantime if you could make a google colab that reproduces the issue it would be helpful ! @timothyjlaurent ",
"Sure thing, @lhoestq.\r\n\r\nhttps://colab.research.google.com/drive/1lg4fbyrUO6m8ssQ2dNdVFaUqMUfA2zZ3?usp=sharing",
"Thanks @timothyjlaurent ! I just merged a fix on master. I also checked your notebook and it looks like it's working now.\r\nI added some tests to make sure it works as expected now :)",
"Great, @lhoestq . I'm trying to verify in the colab:\r\nchanged\r\n```\r\n!pip install datasets\r\n```\r\nto \r\n\r\n```\r\n!pip install git+https://github.com/huggingface/datasets@master\r\n```\r\n\r\nBut I'm still seeing the error - I wonder why?",
"It works on my side @timothyjlaurent on google colab.\r\nDid you try to uninstall datasets first, before updating it to master's version ?",
"I didn't -- it was a new sessions --- buuut - look like it's working today -- woot! I'll close this issue. Thanks @lhoestq "
] | 2020-09-11T22:30:06
| 2020-10-08T16:31:47
| 2020-10-08T16:31:46
|
NONE
| null | null | null | null |
After upgrading to 1.0, I started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
rel_ds_dict["validation"] = rel_ds_dict["test"]
return ner_ds_dict, rel_ds_dict
```
The first train_test_split, `ner_ds`/`ner_ds_dict`, returns a `train` and `test` split that are iterable.
The second, `rel_ds`/`rel_ds_dict` in this case, returns a Dataset dict that has rows, but selecting from or slicing into it returns an empty dictionary, e.g. `rel_ds_dict['train'][0] == {}` and `rel_ds_dict['train'][0:100] == {}`.
Ok, I think I know the problem -- the rel_ds was mapped through a mapper with `num_proc=12`. If I remove `num_proc`, the dataset loads.
I also see errors with other map and filter functions when `num_proc` is set.
```
Done writing 67 indices in 536 bytes .
Done writing 67 indices in 536 bytes .
Fatal Python error: PyCOND_WAIT(gil_cond) failed
```
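A stripped-down sketch of the pattern (the data file, field name, and mapped function below are placeholders, not my actual pipeline; `load_from_cache_file=False` is only there to rule out stale cache files when re-running):

```python
# Hypothetical reproduction sketch: multiprocess map followed by train_test_split.
from datasets import load_dataset

ds = load_dataset("json", data_files="annotations.jsonl")["train"]  # placeholder file

def add_length(example):
    # trivial stand-in for the real mapping function
    example["n_chars"] = len(example["text"])
    return example

ds = ds.map(add_length, num_proc=12, load_from_cache_file=False)
splits = ds.train_test_split(test_size=0.2, shuffle=True, seed=42)
print(splits["train"][0])  # expected: a populated dict, not {}
```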
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/620/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/620/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 26 days, 18:01:40
|
https://api.github.com/repos/huggingface/datasets/issues/619
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/619/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/619/comments
|
https://api.github.com/repos/huggingface/datasets/issues/619/events
|
https://github.com/huggingface/datasets/issues/619
| 699,733,612
|
MDU6SXNzdWU2OTk3MzM2MTI=
| 619
|
Mistakes in MLQA features names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Indeed you're right ! Thanks for reporting that\r\n\r\nCould you open a PR to fix the features names ?"
] | 2020-09-11T20:46:23
| 2020-09-16T06:59:19
| 2020-09-16T06:59:19
|
CONTRIBUTOR
| null | null | null | null |
I think the following features in MLQA shouldn't be named the way they are:
1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)
The reasons I'm suggesting these features be renamed are:
* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA etc. and hence make it easier to concatenate multiple QA datasets.
* The feature names are not the same as the ones provided in the original MLQA dataset (it uses the names I suggested).
I know these columns can be renamed using `Dataset.rename_column_`; `questions` and `ids` can be easily renamed, but `start` is annoying to rename since it's nested inside the `answers` feature.
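For reference, a rough sketch of the workaround (the config name is just an example, and I'm using the non-in-place `rename_column` here): the flat columns are one-liners, while the nested `start` field has to be rewritten through `map`, which is exactly the inconvenience described above.

```python
# Sketch: align MLQA feature names with SQuAD-style datasets.
# "mlqa.en.en" is an example config name; column names are the ones listed above.
from datasets import load_dataset

mlqa = load_dataset("mlqa", "mlqa.en.en", split="validation")

mlqa = mlqa.rename_column("questions", "question")
mlqa = mlqa.rename_column("ids", "id")

def fix_answers(example):
    # rebuild the nested dict so "start" becomes "answer_start"
    example["answers"] = {
        "text": example["answers"]["text"],
        "answer_start": example["answers"]["start"],
    }
    return example

mlqa = mlqa.map(fix_answers)
```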
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/619/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/619/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4 days, 10:12:56
|
https://api.github.com/repos/huggingface/datasets/issues/617
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/617/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/617/comments
|
https://api.github.com/repos/huggingface/datasets/issues/617/events
|
https://github.com/huggingface/datasets/issues/617
| 699,472,596
|
MDU6SXNzdWU2OTk0NzI1OTY=
| 617
|
Compare different Rouge implementations
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ibeltagy",
"id": 2287797,
"login": "ibeltagy",
"node_id": "MDQ6VXNlcjIyODc3OTc=",
"organizations_url": "https://api.github.com/users/ibeltagy/orgs",
"received_events_url": "https://api.github.com/users/ibeltagy/received_events",
"repos_url": "https://api.github.com/users/ibeltagy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ibeltagy",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Updates - the differences between the following three\r\n(1) https://github.com/bheinzerling/pyrouge (previously popular. The one I trust the most)\r\n(2) https://github.com/google-research/google-research/tree/master/rouge\r\n(3) https://github.com/pltrdy/files2rouge (used in fairseq)\r\ncan be explained by two things, stemming and handling multiple sentences.\r\n\r\nStemming: \r\n(1), (2): default is no stemming. (3): default is with stemming ==> No stemming is the correct default as you did [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py#L84)\r\n\r\nMultiple sentences:\r\n(1) `rougeL` splits text using `\\n`\r\n(2) `rougeL` ignores `\\n`. \r\n(2) `rougeLsum` splits text using `\\n`\r\n(3) `rougeL` splits text using `.`\r\n\r\nFor (2), `rougeL` and `rougeLsum` are identical if the sequence doesn't contain `\\n`. With `\\n`, it is `rougeLsum` that matches (1) not `rougeL`. \r\n\r\nOverall, and as far as I understand, for your implementation here https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py#L65 to match the default, you only need to change `rougeL` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py#L86) to `rougeLsum` to correctly compute metrics for text with newlines.\r\n\r\nTagging @sshleifer who might be interested.",
"Thanks for the clarification !\r\nWe're adding Rouge Lsum in #701 ",
"This is a real issue, sorry for missing the mention @ibeltagy\r\n\r\nWe implemented a more involved [solution](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L481) that enforces that sentences are split with `\\n` so that rougeLsum scores match papers even if models don't generate newlines. \r\n\r\nUnfortunately, the best/laziest way I found to do this introduced an `nltk` dependency (For sentence splitting, all sentences don't end in `.`!!!), but this might be avoidable with some effort.\r\n\r\n#### Sidebar: Wouldn't Deterministic Be Better?\r\n\r\n`rouge_scorer.scoring.BootstrapAggregator` is well named but is not deterministic which I would like to change for my mental health, unless there is some really good reason to sample 500 observations before computing f-scores.\r\n\r\nI have a fix on a branch, but I wanted to get some context before introducting a 4th way to compute rouge. Scores are generally within .03 Rouge2 of boostrap after multiplying by 100, e.g 22.05 vs 22.08 Rouge2.\r\n\r\n",
"> This is a real issue, sorry for missing the mention @ibeltagy\r\n> \r\n> We implemented a more involved [solution](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L481) that enforces that sentences are split with `\\n` so that rougeLsum scores match papers even if models don't generate newlines.\r\n> \r\n> Unfortunately, the best/laziest way I found to do this introduced an `nltk` dependency (For sentence splitting, all sentences don't end in `.`!!!), but this might be avoidable with some effort.\r\n\r\nThanks for the details, I didn't know about that. Maybe we should consider adding this processing step or at least mention it somewhere in the library or the documentation\r\n\r\n> #### Sidebar: Wouldn't Deterministic Be Better?\r\n> `rouge_scorer.scoring.BootstrapAggregator` is well named but is not deterministic which I would like to change for my mental health, unless there is some really good reason to sample 500 observations before computing f-scores.\r\n> \r\n> I have a fix on a branch, but I wanted to get some context before introducting a 4th way to compute rouge. Scores are generally within .03 Rouge2 of boostrap after multiplying by 100, e.g 22.05 vs 22.08 Rouge2.\r\n\r\nI think the default `n_samples` of the aggregator is 1000. We could increase it or at least allow users to change it if they want more precise results.",
"Hi, thanks for the solution. \r\n\r\nI am not sure if this is a bug, but on line [510](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L510), are pred, tgt supposed to be swapped?",
"This looks like a bug in an old version of the examples in `transformers`",
"Hi, so I took this example from the HF implementation. What I can see is that the precision of `Hello there` being summarized to `general kenobi` is 1. I don't understand how this calculation is correct.\r\nIs the comparison just counting the words?\r\nand if Yes, then how does this translates to summarization evaluation?\r\n```\r\n >>> rouge = datasets.load_metric('rouge')\r\n >>> predictions = [\"hello there\", \"general kenobi\"]\r\n >>> references = [\"hello there\", \"general kenobi\"]\r\n >>> results = rouge.compute(predictions=predictions, references=references)\r\n >>> print(list(results.keys()))\r\n ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']\r\n >>> print(results[\"rouge1\"])\r\n AggregateScore(low=Score(precision=1.0, recall=1.0, fmeasure=1.0), mid=Score(precision=1.0, recall=1.0, fmeasure=1.0), high=Score(precision=1.0, recall=1.0, fmeasure=1.0))\r\n >>> print(results[\"rouge1\"].mid.fmeasure)\r\n 1.0\r\n\"\"\", stored examples: 0)\r\n```\r\n\r\n\r\n"
] | 2020-09-11T15:49:32
| 2023-03-22T12:08:44
| 2020-10-02T09:52:18
|
NONE
| null | null | null | null |
I used the RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the Pegasus paper, but that are very different from those reported in other papers, [this one](https://arxiv.org/pdf/1909.03186.pdf) for example.
Can you make sure the google-research implementation you are using matches the official perl implementation?
There are a couple of Python wrappers around the Perl implementation: [this one](https://pypi.org/project/pyrouge/) has been commonly used, and [this one](https://github.com/pltrdy/files2rouge) is used in fairseq.
There's also a Python reimplementation [here](https://github.com/pltrdy/rouge), but its RougeL numbers are way off.
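For comparison, a small sketch that calls the google-research scorer (the `rouge_score` package) directly on a toy pair; as noted elsewhere in this thread, `rougeL` ignores newlines while `rougeLsum` splits on them, and stemming is off by default, which are two places where implementations tend to disagree (the texts below are made up):

```python
# Sketch: query the google-research scorer directly via the rouge_score package.
# use_stemmer=False matches the no-stemming default used by the datasets metric.
from rouge_score import rouge_scorer

reference = "the cat sat on the mat .\nit was a sunny day ."
prediction = "the cat was on the mat .\nthe day was sunny ."

scorer = rouge_scorer.RougeScorer(["rougeL", "rougeLsum"], use_stemmer=False)
for name, score in scorer.score(reference, prediction).items():
    print(name, round(score.fmeasure, 4))
```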
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/617/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/617/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 20 days, 18:02:46
|
https://api.github.com/repos/huggingface/datasets/issues/616
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/616/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/616/comments
|
https://api.github.com/repos/huggingface/datasets/issues/616/events
|
https://github.com/huggingface/datasets/issues/616
| 699,462,293
|
MDU6SXNzdWU2OTk0NjIyOTM=
| 616
|
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I have the same issue",
"Same issue here when Trying to load a dataset from disk.",
"I am also experiencing this issue, and don't know if it's affecting my training.",
"Same here. I hope the dataset is not being modified in-place.",
"I think the only way to avoid this warning would be to do a copy of the numpy array before providing it.\r\n\r\nThis would slow down a bit the iteration over the dataset but maybe it would be safer. We could disable the copy with a flag on the `set_format` command.\r\n\r\nIn most typical cases of training a NLP model, PyTorch shouldn't modify the input so it's ok to have a non-writable array but I can understand the warning is a bit scary so maybe we could choose the side of non-warning/slower by default and have an option to speedup.\r\n\r\nWhat do you think @lhoestq ? ",
"@thomwolf Would it be possible to have the array look writeable, but raise an error if it is actually written to?\r\n\r\nI would like to keep my code free of warning, but I also wouldn't like to slow down the program because of unnecessary copy operations. ",
"@AndreasMadsen probably not I would guess (no free lunch hahah)",
"@thomwolf Why not? Writable is checked with `arr.flags.writeable`, and writing is done via magic methods.",
"Well because I don't know the internal of numpy as well as you I guess hahahah, do you want to try to open a PR proposing a solution?",
"@thomwolf @AndreasMadsen I think this is a terrible idea, n/o, and I am very much against it. Modifying internals of an array in such a hacky way is bound to run into other (user) issues down the line. To users it would not be clear at all what is going on e.g. when they check for write access (which will return True) but then they get a warning that the array is not writeable. That's extremely confusing. \r\n\r\nIf your only goal is to get rid of warnings in your code, then you can just use a [simplefilter](https://docs.python.org/3.8/library/warnings.html#temporarily-suppressing-warnings) for UserWarnings in your own code. Changing the code-base in such an intuitive way does not seem like a good way to go and sets a bad precedent, imo. \r\n\r\n(Feel free to disagree, of course.)\r\n\r\nIMO a warning can stay (as they can be filtered by users anyway), but it can be clarified why the warning takes place.",
"> To users it would not be clear at all what is going on e.g. when they check for write access (which will return True) but then they get a warning that the array is not writeable. That's extremely confusing.\r\n\r\nConfusion can be resolved with a helpful error message. In this case, that error message can be controlled by huggingface/datasets. The right argument here is that if code depends on `.flags.writable` being truthful (not just for warnings), then it will cause unavoidable errors. Although, I can't imagine such a use-case.\r\n\r\n> If your only goal is to get rid of warnings in your code, then you can just use a simplefilter for UserWarnings in your own code. Changing the code-base in such an intuitive way does not seem like a good way to go and sets a bad precedent, imo.\r\n\r\nI don't want to ignore all `UserWarnings`, nor all warnings regarding non-writable arrays. Ignoring warnings leads to hard to debug issues.\r\n\r\n> IMO a warning can stay (as they can be filtered by users anyway), but it can be clarified why the warning takes place.\r\n\r\nPlain use cases should really not generate warnings. It teaches developers to ignore warnings which is a terrible practice.\r\n\r\n---\r\n\r\nThe best solution would be to allow non-writable arrays in `DataLoader`, but that is a PyTorch issue.",
"> The right argument here is that if code depends on `.flags.writable` being truthful (not just for warnings), then it will cause unavoidable errors. Although, I can't imagine such a use-case.\r\n\r\nThat's exactly the argument in my first sentence. Too often someone \"cannot think of a use-case\", but you can not foresee the use-cases of a whole research community.\r\n \r\n> I don't want to ignore all `UserWarnings`, nor all warnings regarding non-writable arrays. Ignoring warnings leads to hard to debug issues.\r\n\r\nThat's fair.\r\n\r\n> Plain use cases should really not generate warnings. It teaches developers to ignore warnings which is a terrible practice.\r\n\r\nBut this is not a plain use-case (because Pytorch does not support these read-only tensors). Manually setting the flag to writable will solve the issue on the surface but is basically just a hack to compensate for something that is not allowed in another library. \r\n\r\nWhat about an \"ignore_warnings\" flag in `set_format` that when True wraps the offending code in a block to ignore userwarnings at that specific step in [_convert_outputs](https://github.com/huggingface/datasets/blob/880c2c76a8223a00c303eab2909371e857113063/src/datasets/arrow_dataset.py#L821)? Something like:\r\n\r\n```python\r\ndef _convert_outputs(..., ignore_warnings=True):\r\n ...\r\n with warnings.catch_warnings():\r\n if ignore_warnings:\r\n warnings.simplefilter(\"ignore\", UserWarning)\r\n return torch.tensor(...)\r\n# continues without warning filter after context manager...\r\n```",
"> But this is not a plain use-case (because Pytorch does not support these read-only tensors).\r\n\r\nBy \"plain\", I mean the recommended way to use `datasets` with PyTorch according to the `datasets` documentation.",
"This error is what I see when I run the first lines of the Pytorch Quickstart. It should also say that it should be ignored and/or how to fix it. BTW, this is a Pytorch error message -- not a Huggingface error message. My code runs anyway."
] | 2020-09-11T15:39:16
| 2021-07-22T21:12:21
| null |
CONTRIBUTOR
| null | null | null | null |
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column (`text`) should be tokenized and another (an embedding vector) should be retained; all other columns should be removed. When I eventually try to set the format for the columns with `set_format`, I get this strange UserWarning without a stack trace:
> Set __getitem__(key) output type to torch for ['input_ids', 'sembedding'] columns (when key is int or slice) and don't output other (un-formatted) columns.
> C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\datasets\arrow_dataset.py:835: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:141.)
> return torch.tensor(x, **format_kwargs)
The first message might not be related to the warning, but it is odd that it is shown, too. It is unclear whether it describes something I should do or something the program is doing at that moment.
Snippet:
```python
import torch
from datasets import Dataset
from transformers import AutoTokenizer

dataset = Dataset.from_dict(torch.load("data/dummy.pt.pt"))
print(dataset)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
keys_to_retain = {"input_ids", "sembedding"}
dataset = dataset.map(lambda example: tokenizer(example["text"], padding='max_length'), batched=True)
dataset.remove_columns_(set(dataset.column_names) - keys_to_retain)
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])

dataloader = torch.utils.data.DataLoader(dataset, batch_size=2)
print(next(iter(dataloader)))
```
PS: the input type for `remove_columns_` should probably be an Iterable rather than just a List.
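For reference, a minimal sketch of what triggers the warning and two user-side workarounds (the array below is a stand-in, not data from this report, and whether `torch.tensor` still warns depends on the PyTorch version):
```python
import warnings

import numpy as np
import torch

arr = np.zeros(3)
arr.flags.writeable = False  # simulate the read-only buffers that Arrow-backed datasets hand out

# Converting a read-only array may emit the UserWarning quoted above.
# Copying first gives PyTorch its own writeable buffer, so no warning is raised:
t_copy = torch.tensor(arr.copy())

# Alternatively, suppress only this warning around the conversion:
with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    t_shared = torch.tensor(arr)
```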
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/616/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/616/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/615
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/615/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/615/comments
|
https://api.github.com/repos/huggingface/datasets/issues/615/events
|
https://github.com/huggingface/datasets/issues/615
| 699,410,773
|
MDU6SXNzdWU2OTk0MTA3NzM=
| 615
|
Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Related: https://issues.apache.org/jira/browse/ARROW-9773\r\n\r\nIt's definitely a size thing. I took a smaller dataset with 87000 rows and did:\r\n```\r\nfor i in range(10,1000,20):\r\n table = pa.concat_tables([dset._data]*i)\r\n table.take([0])\r\n```\r\nand it broke at around i=300.\r\n\r\nAlso when `_indices` is not None, this breaks indexing by slice. E.g. `dset.shuffle()[:1]` breaks.\r\n\r\nLuckily so far I haven't seen `_indices.column(0).take` break, which means it doesn't break `select` or anything like that which is where the speed really matters, it's just `_getitem`. So I'm currently working around it by just doing the arrow v0 method in `_getitem`:\r\n```\r\n#if PYARROW_V0:\r\ndata_subset = pa.concat_tables(\r\n self._data.slice(indices_array[i].as_py(), 1) for i in range(len(indices_array))\r\n)\r\n#else:\r\n #data_subset = self._data.take(indices_array)\r\n```",
"Let me know if you meet other offset overflow issues @joeddav ",
"Will this problem be solved in newer version?",
"This specific issue has been fixed in https://github.com/huggingface/datasets/pull/645\r\n\r\nIf you still have this error, could you open a new issue and explain how to reproduce the error ?",
"same error here in version 2.1.0",
"Facing the same issue. \r\nSteps to reproduce: (dataset is a few GB big so try in colab maybe)\r\nDatasets version - 2.11.0\r\n```\r\nimport datasets\r\nimport re\r\n\r\nds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split = 'train')\r\n\r\ndef get_text_caption(example):\r\n regex_pattern = r'\\s\\d+x\\d+|,\\sLQ|,\\sgrid|\\.\\w+$'\r\n example['text_caption'] = re.sub(regex_pattern, '', example['picture_text'])\r\n return example\r\n\r\nds = ds.map(get_text_caption)\r\n```\r\n\r\nI am trying to apply a regex to remove certain patterns from a text column. Not sure why this error is showing up.",
"Got this error on a very large data set (900m rows, 35 cols) performing a similar batch map operation.",
"There is a solution that has been proposed here: https://github.com/huggingface/datasets/issues/5783",
"@lhoestq I ran into this problem with load_dataset. What should I do\r\n",
"What version of `datasets` are you using ? Feel free to open a new issue with some details (e.g. what dataset you loaded, what code you ran etc)",
"@lhoestq It's been solved,thanks",
"I am facing this problem.\r\n\r\nHere's my code:\r\n\r\n```python\r\nmodel.eval()\r\nmodel.to('cuda')\r\nblock_size = tokenizer.model_max_length\r\ndef group_texts(examples):\r\n # Concatenate all texts.\r\n concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can\r\n # customize this part to your needs.\r\n total_length = (total_length // block_size) * block_size\r\n # Split by chunks of max_len.\r\n result = {\r\n k: [t[i : i + block_size] for i in range(0, total_length, block_size)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n with torch.no_grad():\r\n input_ids = torch.tensor(result[\"input_ids\"]).to('cuda')\r\n attention_mask = torch.tensor(result[\"attention_mask\"]).to('cuda')\r\n r = model.forward(input_ids=input_ids, attention_mask=attention_mask)\r\n result[\"labels\"] = r.logits.cpu().numpy().tolist()\r\n return result\r\n\r\n\r\nlm_datasets = tokenized_datasets.map(\r\n group_texts,\r\n batched=True,\r\n batch_size=1000,\r\n num_proc=1,\r\n)\r\n```\r\n\r\nThis works for a few iterations and then gives the error:\r\n```sh\r\nTraceback (most recent call last):\r\n File \"/home/jpiabrantes/rosetta/tmp.py\", line 45, in <module>\r\n lm_datasets = tokenized_datasets.map(\r\n File \"/home/jpiabrantes/rosetta/.venv/lib/python3.10/site-packages/datasets/dataset_dict.py\", line 868, in map\r\n {\r\n File \"/home/jpiabrantes/rosetta/.venv/lib/python3.10/site-packages/datasets/dataset_dict.py\", line 869, in <dictcomp>\r\n k: dataset.map(\r\n File \"/home/jpiabrantes/rosetta/.venv/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 593, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/jpiabrantes/rosetta/.venv/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 558, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/jpiabrantes/rosetta/.venv/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3105, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"/home/jpiabrantes/rosetta/.venv/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3501, in _map_single\r\n writer.write_batch(batch)\r\n File \"/home/jpiabrantes/rosetta/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 571, in write_batch\r\n self.write_table(pa_table, writer_batch_size)\r\n File \"/home/jpiabrantes/rosetta/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 583, in write_table\r\n pa_table = pa_table.combine_chunks()\r\n File \"pyarrow/table.pxi\", line 3638, in pyarrow.lib.Table.combine_chunks\r\n File \"pyarrow/error.pxi\", line 154, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 91, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays\r\n```",
"Hi ! What version of `pyarrow` are you using ? Also what's the lengths of your texts ?",
"@lhoestq pyarrow version: 15.0.2\r\n\r\nlengths of texts are 1024 tokens.",
"```\r\nimport pandas as pd\r\nfrom datasets import Dataset,Image\r\n\r\n# Read the CSV file\r\ndf = pd.read_csv(\"MedMQ-2k/metadata.csv\")\r\n# Create a Hugging Face Dataset\r\ndataset = Dataset.from_pandas(df)\r\n\r\ndataset = dataset.map(lambda example: {\"image\": example[\"file_name\"]}, batched=True)\r\n# Convert the file_name column to Image type\r\ndataset = dataset.cast_column(\"image\", Image())\r\n\r\n# Upload to Hugging Face Hub (make sure authentication is set up)\r\ndataset.push_to_hub(\"MedMLLM-attack/3MAD-24K\", num_shards=16)\r\n```\r\n<img width=\"1143\" alt=\"截屏2024-05-02 13 04 09\" src=\"https://github.com/huggingface/datasets/assets/48406770/a722eadc-f8f1-4094-a38b-eaeebfa11b83\">\r\n\r\n<img width=\"176\" alt=\"截屏2024-05-02 13 03 40\" src=\"https://github.com/huggingface/datasets/assets/48406770/3f6fbeff-2a43-4d39-8ffa-fd21769608bc\">\r\n\r\nsame problem here\r\n\r\n\r\n- datasets 2.12.0\r\n- pyarrow 11.0.0\r\n",
"problem solved by using dataset split, but i don't know what's different between \"split and subset\""
] | 2020-09-11T14:50:38
| 2024-05-02T06:53:15
| 2020-09-19T16:46:31
|
MEMBER
| null | null | null | null |
How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-381aedc9811b> in <module>
----> 1 wikipedia[[0]]
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
1069 format_columns=self._format_columns,
1070 output_all_columns=self._output_all_columns,
-> 1071 format_kwargs=self._format_kwargs,
1072 )
1073
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1037 )
1038 else:
-> 1039 data_subset = self._data.take(indices_array)
1040
1041 if format_type is not None:
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.take()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/compute.py in take(data, indices, boundscheck)
266 """
267 options = TakeOptions(boundscheck)
--> 268 return call_function('take', [data, indices], options)
269
270
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: offset overflow while concatenating arrays
```
It seems to work fine with small datasets or with pyarrow 0.17.1
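A rough sketch of the slice-and-concatenate fallback discussed in the comments of this issue, assuming a `pyarrow.Table` and a list of integer row indices (an illustration only, not the exact fix used in the library):
```python
import pyarrow as pa

def take_rows(table: pa.Table, indices) -> pa.Table:
    # table.take(indices) can raise "offset overflow while concatenating arrays"
    # on very large tables, so fall back to joining one-row slices instead.
    return pa.concat_tables([table.slice(int(i), 1) for i in indices])

# Small example:
table = pa.table({"text": ["a", "b", "c"], "id": [0, 1, 2]})
subset = take_rows(table, [0, 2])
```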
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/615/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/615/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 8 days, 1:55:53
|
https://api.github.com/repos/huggingface/datasets/issues/611
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/611/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/611/comments
|
https://api.github.com/repos/huggingface/datasets/issues/611/events
|
https://github.com/huggingface/datasets/issues/611
| 698,863,988
|
MDU6SXNzdWU2OTg4NjM5ODg=
| 611
|
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32364921?v=4",
"events_url": "https://api.github.com/users/sangyx/events{/privacy}",
"followers_url": "https://api.github.com/users/sangyx/followers",
"following_url": "https://api.github.com/users/sangyx/following{/other_user}",
"gists_url": "https://api.github.com/users/sangyx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sangyx",
"id": 32364921,
"login": "sangyx",
"node_id": "MDQ6VXNlcjMyMzY0OTIx",
"organizations_url": "https://api.github.com/users/sangyx/orgs",
"received_events_url": "https://api.github.com/users/sangyx/received_events",
"repos_url": "https://api.github.com/users/sangyx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sangyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sangyx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sangyx",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Can you give us stats/information on your pandas DataFrame?",
"```\r\n<class 'pandas.core.frame.DataFrame'>\r\nInt64Index: 17136104 entries, 0 to 17136103\r\nData columns (total 6 columns):\r\n # Column Dtype \r\n--- ------ ----- \r\n 0 item_id int64 \r\n 1 item_titl object \r\n 2 start_price float64\r\n 3 shipping_fee float64\r\n 4 picture_url object \r\n 5 embeddings object \r\ndtypes: float64(2), int64(1), object(3)\r\nmemory usage: 915.2+ MB\r\n```",
"Thanks and some more on the `embeddings` and `picture_url` would be nice as well (type and max lengths of the elements)",
"`embedding` is `np.array` of shape `(128,)`. `picture_url` is url, such as 'https://i.ebayimg.com/00/s/MTE5OVgxNjAw/z/ZOsAAOSwAG9fHQq5/$_12.JPG?set_id=880000500F;https://i.ebayimg.com/00/s/MTE5OVgxNjAw/z/OSgAAOSwokBfHQq8/$_12.JPG?set_id=880000500F'",
"It looks like a Pyarrow limitation.\r\nI was able to reproduce the error with \r\n\r\n```python\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\n n = 1713614\r\ndf = pd.DataFrame.from_dict({\"a\": list(np.zeros((n, 128))), \"b\": range(n)})\r\npa.Table.from_pandas(df)\r\n```\r\n\r\nI also tried with 50% of the dataframe and it actually works.\r\nI created an issue on Apache Arrow's JIRA [here](https://issues.apache.org/jira/browse/ARROW-9976)\r\n\r\nOne way to fix that would be to chunk the dataframe and concatenate arrow tables.",
"It looks like it's going to be fixed in pyarrow 2.0.0 :)\r\n\r\nIn the meantime I suggest to chunk big dataframes to create several small datasets, and then concatenate them using [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets)"
] | 2020-09-11T05:29:12
| 2022-06-01T15:11:43
| 2022-06-01T15:11:43
|
NONE
| null | null | null | null |
Hi, I'm trying to load a dataset from a pandas DataFrame, but I get this error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)
~/miniconda3/envs/dev/lib/python3.7/site-packages/nlp/arrow_dataset.py in from_pandas(cls, df, features, info, split)
223 info.features = features
224 pa_table: pa.Table = pa.Table.from_pandas(
--> 225 df=df, schema=pa.schema(features.type) if features is not None else None
226 )
227 return cls(pa_table, info=info, split=split)
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pandas()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads, columns, safe)
591 for i, maybe_fut in enumerate(arrays):
592 if isinstance(maybe_fut, futures.Future):
--> 593 arrays[i] = maybe_fut.result()
594
595 types = [x.type for x in arrays]
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in result(self, timeout)
426 raise CancelledError()
427 elif self._state == FINISHED:
--> 428 return self.__get_result()
429
430 self._condition.wait(timeout)
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/thread.py in run(self)
55
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
59 self.future.set_exception(exc)
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in convert_column(col, field)
557
558 try:
--> 559 result = pa.array(col, type=type_, from_pandas=True, safe=safe)
560 except (pa.ArrowInvalid,
561 pa.ArrowNotImplementedError,
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._ndarray_to_array()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
```
My code is:
```python
from nlp import Dataset
dataset = Dataset.from_pandas(emb)
```
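A sketch of the chunking workaround suggested in the comments, using the current `datasets` package (the report used the older `nlp` name); the chunk size and the small stand-in DataFrame below are illustrative:
```python
import numpy as np
import pandas as pd
from datasets import Dataset, concatenate_datasets

# Small stand-in for the real ~17M-row DataFrame from this report.
emb = pd.DataFrame({"item_id": range(10), "embeddings": list(np.zeros((10, 128)))})

chunk_size = 1_000_000  # keep each chunk well below the 2147483646 child-element cap
parts = [
    Dataset.from_pandas(emb.iloc[start:start + chunk_size].reset_index(drop=True))
    for start in range(0, len(emb), chunk_size)
]
dataset = concatenate_datasets(parts)
```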
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/611/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/611/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 628 days, 9:42:31
|
https://api.github.com/repos/huggingface/datasets/issues/610
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/610/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/610/comments
|
https://api.github.com/repos/huggingface/datasets/issues/610/events
|
https://github.com/huggingface/datasets/issues/610
| 698,349,388
|
MDU6SXNzdWU2OTgzNDkzODg=
| 610
|
Load text file for RoBERTa pre-training.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4",
"events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}",
"followers_url": "https://api.github.com/users/chiyuzhang94/followers",
"following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}",
"gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chiyuzhang94",
"id": 33407613,
"login": "chiyuzhang94",
"node_id": "MDQ6VXNlcjMzNDA3NjEz",
"organizations_url": "https://api.github.com/users/chiyuzhang94/orgs",
"received_events_url": "https://api.github.com/users/chiyuzhang94/received_events",
"repos_url": "https://api.github.com/users/chiyuzhang94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chiyuzhang94",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Could you try\r\n```python\r\nload_dataset('text', data_files='test.txt',cache_dir=\"./\", split=\"train\")\r\n```\r\n?\r\n\r\n`load_dataset` returns a dictionary by default, like {\"train\": your_dataset}",
"Hi @lhoestq\r\nThanks for your suggestion.\r\n\r\nI tried \r\n```\r\ndataset = load_dataset('text', data_files='test.txt',cache_dir=\"./\", split=\"train\")\r\nprint(dataset)\r\ndataset.set_format(type='torch',columns=[\"text\"])\r\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=8)\r\nnext(iter(dataloader))\r\n```\r\n\r\nBut it still doesn't work and got error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-7-388aca337e2f> in <module>\r\n----> 1 next(iter(dataloader))\r\n\r\n/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)\r\n 361 \r\n 362 def __next__(self):\r\n--> 363 data = self._next_data()\r\n 364 self._num_yielded += 1\r\n 365 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)\r\n 401 def _next_data(self):\r\n 402 index = self._next_index() # may raise StopIteration\r\n--> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 404 if self._pin_memory:\r\n 405 data = _utils.pin_memory.pin_memory(data)\r\n\r\n/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)\r\n 42 def fetch(self, possibly_batched_index):\r\n 43 if self.auto_collation:\r\n---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]\r\n 45 else:\r\n 46 data = self.dataset[possibly_batched_index]\r\n\r\n/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)\r\n 42 def fetch(self, possibly_batched_index):\r\n 43 if self.auto_collation:\r\n---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]\r\n 45 else:\r\n 46 data = self.dataset[possibly_batched_index]\r\n\r\n/Library/Python/3.7/site-packages/datasets-0.4.0-py3.7.egg/datasets/arrow_dataset.py in __getitem__(self, key)\r\n 1069 format_columns=self._format_columns,\r\n 1070 output_all_columns=self._output_all_columns,\r\n-> 1071 format_kwargs=self._format_kwargs,\r\n 1072 )\r\n 1073 \r\n\r\n/Library/Python/3.7/site-packages/datasets-0.4.0-py3.7.egg/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)\r\n 1056 format_columns=format_columns,\r\n 1057 output_all_columns=output_all_columns,\r\n-> 1058 format_kwargs=format_kwargs,\r\n 1059 )\r\n 1060 return outputs\r\n\r\n/Library/Python/3.7/site-packages/datasets-0.4.0-py3.7.egg/datasets/arrow_dataset.py in _convert_outputs(self, outputs, format_type, format_columns, output_all_columns, format_kwargs)\r\n 872 continue\r\n 873 if format_columns is None or k in format_columns:\r\n--> 874 v = map_nested(command, v, **map_nested_kwargs)\r\n 875 output_dict[k] = v\r\n 876 return output_dict\r\n\r\n/Library/Python/3.7/site-packages/datasets-0.4.0-py3.7.egg/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 214 # Singleton\r\n 215 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 216 return function(data_struct)\r\n 217 \r\n 218 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)\r\n\r\n/Library/Python/3.7/site-packages/datasets-0.4.0-py3.7.egg/datasets/arrow_dataset.py in command(x)\r\n 833 if x.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects\r\n 834 return [map_nested(command, i, **map_nested_kwargs) for i in x]\r\n--> 835 return torch.tensor(x, 
**format_kwargs)\r\n 836 \r\n 837 elif format_type == \"tensorflow\":\r\n\r\nTypeError: new(): invalid data type 'str'\r\n```\r\n\r\nI found type can be ['numpy', 'torch', 'tensorflow', 'pandas'] only, how can I deal with the string type?",
"You need to tokenize the string inputs to convert them in integers before you can feed them to a pytorch dataloader.\r\n\r\nYou can read the quicktour of the datasets or the transformers libraries to know more about that:\r\n- transformers: https://huggingface.co/transformers/quicktour.html\r\n- dataset: https://huggingface.co/docs/datasets/quicktour.html",
"Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).\r\nBut finally got it working. This is what I did after looking into the documentation.\r\n\r\n1. split the whole dataset file into smaller files\r\n```bash\r\nmkdir ./shards\r\nsplit -a 4 -l 256000 -d full_raw_corpus.txt ./shards/shard_\r\n````\r\n2. Pass paths of small data files to `load_dataset`\r\n```python\r\nfiles = glob.glob('shards/*')\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('text', data_files=files, split='train')\r\n```\r\n(On passing the whole dataset file (11GB) directly to `load_dataset` was resulting into RAM issue)\r\n\r\n3. Tokenization\r\n```python\r\ndef encode(examples):\r\n return tokenizer(examples['text'], truncation=True, padding='max_length')\r\ndataset = dataset.map(encode, batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n```\r\n Now you can pass `dataset` to `Trainer` or `pytorch DataLoader`\r\n```python\r\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=4)\r\nnext(iter(dataloader))\r\n```\r\nHope this helps\r\n",
"Thanks, @thomwolf and @sipah00 ,\r\n\r\nI tried to implement your suggestions in my scripts. \r\nNow, I am facing some connection time-out error. I am using my local file, I have no idea why the module request s3 database.\r\n\r\nThe log is:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/.local/lib/python3.6/site-packages/requests/adapters.py\", line 449, in send\r\n raise err\r\n File \"/home/.local/lib/python3.6/site-packages/urllib3/util/connection.py\", line 74, in create_connection\r\n timeout=timeout\r\n File \"/home/.local/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 720, in urlopen\r\n sock.connect(sa)\r\nTimeoutError: [Errno 110] Connection timed out\r\n\r\nTraceback (most recent call last):\r\n File \"/home/.local/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 672, in urlopen\r\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n File \"/home/.local/lib/python3.6/site-packages/urllib3/util/retry.py\", line 436, in increment\r\n chunked=chunked,\r\n File \"/home/.local/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 376, in _make_request\r\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\r\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/text/text.py (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection obj\r\nect at 0x7fff401e0e48>: Failed to establish a new connection: [Errno 110] Connection timed out',))\r\n\r\nTraceback (most recent call last):\r\n File \"/scratch/roberta_emohash/run_language_modeling.py\", line 1019, in <module>\r\n main()\r\n File \"/scratch/roberta_emohash/run_language_modeling.py\", line 962, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)\r\n File \"/scratch/roberta_emohash/run_language_modeling.py\", line 177, in load_and_cache_examples\r\n return HG_Datasets(tokenizer, file_path, args)\r\n File \"/scratch/roberta_emohash/run_language_modeling.py\", line 117, in HG_Datasets\r\n dataset = load_dataset('text', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")\r\n File \"/arc/project/evn_py36/datasets/datasets/src/datasets/load.py\", line 590, in load_dataset\r\n self._validate_conn(conn)\r\n File \"/home/.local/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 994, in _validate_conn\r\n conn.connect()\r\n File \"/home/.local/lib/python3.6/site-packages/urllib3/connection.py\", line 300, in connect\r\n conn = self._new_conn()\r\n File \"/home/.local/lib/python3.6/site-packages/urllib3/connection.py\", line 169, in _new_conn\r\n self, \"Failed to establish a new connection: %s\" % e\r\nurllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7fff401e0da0>: Failed to establish a new connection: [Errno 110] Connection timed out\r\n\r\n``` \r\n\r\nDo you have any experience on this issue?",
"No, I didn't encounter this problem, it seems to me a network problem",
"I noticed this is because I use a cloud server where does not provide for connections from our standard compute nodes to outside resources. \r\n\r\nFor the `datasets` package, it seems that if the loading script is not already cached in the library it will attempt to connect to an AWS resource to download the dataset loading script. \r\n\r\nI am wondering why the package works in this way. Do you have any suggestions to solve this issue? ",
"I solved the above issue by downloading text.py manually and passing the path to the `load_dataset` function. \r\n\r\nNow, I have a new issue with the Read-only file system.\r\n\r\nThe error is: \r\n```\r\nI0916 22:14:38.453380 140737353971520 filelock.py:274] Lock 140734268996072 acquired on /scratch/chiyuzh/roberta/text.py.lock\r\nFound main folder for dataset /scratch/chiyuzh/roberta/text.py at /home/chiyuzh/.cache/huggingface/modules/datasets_modules/datasets/text\r\nCreating specific version folder for dataset /scratch/chiyuzh/roberta/text.py at /home/chiyuzh/.cache/huggingface/modules/datasets_modules/datasets/text/512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7fcc649178b014\r\nI0916 22:14:38.530371 140737353971520 filelock.py:318] Lock 140734268996072 released on /scratch/chiyuzh/roberta/text.py.lock\r\nTraceback (most recent call last):\r\n File \"/scratch/chiyuzh/roberta/run_language_modeling_hg.py\", line 1019, in <module>\r\n main()\r\n File \"/scratch/chiyuzh/roberta/run_language_modeling_hg.py\", line 962, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)\r\n File \"/scratch/chiyuzh/roberta/run_language_modeling_hg.py\", line 177, in load_and_cache_examples\r\n return HG_Datasets(tokenizer, file_path, args)\r\n File \"/scratch/chiyuzh/roberta/run_language_modeling_hg.py\", line 117, in HG_Datasets\r\n dataset = load_dataset('/scratch/chiyuzh/roberta/text.py', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")\r\n File \"/arc/project/chiyuzh/evn_py36/datasets/src/datasets/load.py\", line 590, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"/arc/project/chiyuzh/evn_py36/datasets/src/datasets/load.py\", line 385, in prepare_module\r\n os.makedirs(hash_folder_path)\r\n File \"/project/chiyuzh/evn_py36/lib/python3.6/os.py\", line 220, in makedirs\r\n mkdir(name, mode)\r\nOSError: [Errno 30] Read-only file system: '/home/chiyuzh/.cache/huggingface/modules/datasets_modules/datasets/text/512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7fcc649178b014'\r\n\r\n```\r\n\r\nI installed datasets at /project/chiyuzh/evn_py36/datasets/src where is a writable directory.\r\nI also tried change the environment variables to the writable directory:\r\n`export HF_MODULES_PATH=/project/chiyuzh/evn_py36/datasets/cache_dir/`\r\n`export HF_DATASETS_CACHE=/project/chiyuzh/evn_py36/datasets/cache_dir/`\r\n \r\nIn my scripts, I also changed to:\r\n`dataset = load_dataset('/scratch/chiyuzh/roberta/text.py', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")`\r\n`data_cache_dir = $TMPDIR/data/` that also a writable directory.\r\n \r\nBut it still try to make directory at /home/chiyuzh/.cache/huggingface/modules/.\r\nDo you have any idea about this issue? @thomwolf \r\n",
"> Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).\r\n> But finally got it working. This is what I did after looking into the documentation.\r\n> \r\n> 1. split the whole dataset file into smaller files\r\n> \r\n> ```shell\r\n> mkdir ./shards\r\n> split -a 4 -l 256000 -d full_raw_corpus.txt ./shards/shard_\r\n> ```\r\n> \r\n> 1. Pass paths of small data files to `load_dataset`\r\n> \r\n> ```python\r\n> files = glob.glob('shards/*')\r\n> from datasets import load_dataset\r\n> dataset = load_dataset('text', data_files=files, split='train')\r\n> ```\r\n> \r\n> (On passing the whole dataset file (11GB) directly to `load_dataset` was resulting into RAM issue)\r\n> \r\n> 1. Tokenization\r\n> \r\n> ```python\r\n> def encode(examples):\r\n> return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> dataset = dataset.map(encode, batched=True)\r\n> dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n> ```\r\n> \r\n> Now you can pass `dataset` to `Trainer` or `pytorch DataLoader`\r\n> \r\n> ```python\r\n> dataloader = torch.utils.data.DataLoader(dataset, batch_size=4)\r\n> next(iter(dataloader))\r\n> ```\r\n> \r\n> Hope this helps\r\n\r\nWhen I run 'dataset = dataset.map(encode, batched=True)',\r\nI encountered a problem like this:\r\n\r\n> Testing the mapped function outputs\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 300, in map\r\n for k, dataset in self.items()\r\n File \"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 300, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1224, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1195, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"<stdin>\", line 3, in encode\r\nTypeError: __init__() takes 1 positional argument but 2 were given",
"> > Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).\r\n> > But finally got it working. This is what I did after looking into the documentation.\r\n> > \r\n> > 1. split the whole dataset file into smaller files\r\n> > \r\n> > ```shell\r\n> > mkdir ./shards\r\n> > split -a 4 -l 256000 -d full_raw_corpus.txt ./shards/shard_\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > 1. Pass paths of small data files to `load_dataset`\r\n> > \r\n> > ```python\r\n> > files = glob.glob('shards/*')\r\n> > from datasets import load_dataset\r\n> > dataset = load_dataset('text', data_files=files, split='train')\r\n> > ```\r\n> > \r\n> > \r\n> > (On passing the whole dataset file (11GB) directly to `load_dataset` was resulting into RAM issue)\r\n> > \r\n> > 1. Tokenization\r\n> > \r\n> > ```python\r\n> > def encode(examples):\r\n> > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > dataset = dataset.map(encode, batched=True)\r\n> > dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n> > ```\r\n> > \r\n> > \r\n> > Now you can pass `dataset` to `Trainer` or `pytorch DataLoader`\r\n> > ```python\r\n> > dataloader = torch.utils.data.DataLoader(dataset, batch_size=4)\r\n> > next(iter(dataloader))\r\n> > ```\r\n> > \r\n> > \r\n> > Hope this helps\r\n> \r\n> When I run 'dataset = dataset.map(encode, batched=True)',\r\n> I encountered a problem like this:\r\n> \r\n> > Testing the mapped function outputs\r\n> > Traceback (most recent call last):\r\n> > File \"\", line 1, in \r\n> > File \"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 300, in map\r\n> > for k, dataset in self.items()\r\n> > File \"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/datasets/dataset_dict.py\", line 300, in \r\n> > for k, dataset in self.items()\r\n> > File \"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1224, in map\r\n> > update_data = does_function_return_dict(test_inputs, test_indices)\r\n> > File \"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1195, in does_function_return_dict\r\n> > function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n> > File \"\", line 3, in encode\r\n> > TypeError: **init**() takes 1 positional argument but 2 were given\r\n\r\nWhat is your encoder function?",
"> ```python\r\n> def encode(examples):\r\n> return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> ```\r\n\r\nIt is the same as suggested:\r\n\r\n> def encode(examples):\r\n return tokenizer(examples['text'], truncation=True, padding='max_length')",
"> > ```python\r\n> > def encode(examples):\r\n> > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > ```\r\n> \r\n> It is the same as suggested:\r\n> \r\n> > def encode(examples):\r\n> > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n\r\nDo you use this function in a `class` object? \r\n\r\ninit() takes 1 positional argument but 2 were given. I guess the additional argument is self?",
"> > > ```python\r\n> > > def encode(examples):\r\n> > > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > > ```\r\n> > \r\n> > \r\n> > It is the same as suggested:\r\n> > > def encode(examples):\r\n> > > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> \r\n> Do you use this function in a `class` object?\r\n> \r\n> init() takes 1 positional argument but 2 were given. I guess the additional argument is self?\r\n\r\nThanks for your reply.\r\nCould you provide some simple example here?\r\nCurrently, I do not use this function in a class object. \r\nI think you are right and I was wondering how to construct this class.\r\nI try to modify it based on transformers' LineByLineTextDataset. Am I correct?\r\n\r\n> class LineByLineTextDataset(Dataset):\r\n \"\"\"\r\n This will be superseded by a framework-agnostic approach\r\n soon.\r\n \"\"\"\r\n\r\n def __init__(self, tokenizer: PreTrainedTokenizer, file_path: str, block_size: int):\r\n assert os.path.isfile(file_path), f\"Input file path {file_path} not found\"\r\n # Here, we do not cache the features, operating under the assumption\r\n # that we will soon use fast multithreaded tokenizers from the\r\n # `tokenizers` repo everywhere =)\r\n #logger.info(\"Creating features from dataset file at %s\", file_path)\r\n #with open(file_path, encoding=\"utf-8\") as f:\r\n # lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]\r\n #batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)\r\n\r\n\timport glob\r\n\tfiles = glob.glob('/home/mtzhang111/fairseq/cs_doc/shards/shard_003*')\r\n\tfrom datasets import load_dataset\r\n\tdataset = load_dataset('text', data_files=files)\r\n batch_encoding= dataset.map(encode, batched=True)\r\n self.examples = batch_encoding[\"input_ids\"]\r\n\t\r\n\r\n def encode(examples):\r\n return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n\r\n def __len__(self):\r\n return len(self.examples)\r\n\r\n def __getitem__(self, i) -> torch.Tensor:\r\n return torch.tensor(self.examples[i], dtype=torch.long)\r\n",
"> > > > ```python\r\n> > > > def encode(examples):\r\n> > > > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > > > ```\r\n> > > \r\n> > > \r\n> > > It is the same as suggested:\r\n> > > > def encode(examples):\r\n> > > > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > \r\n> > \r\n> > Do you use this function in a `class` object?\r\n> > init() takes 1 positional argument but 2 were given. I guess the additional argument is self?\r\n> \r\n> Thanks for your reply.\r\n> Could you provide some simple example here?\r\n> Currently, I do not use this function in a class object.\r\n> I think you are right and I was wondering how to construct this class.\r\n> I try to modify it based on transformers' LineByLineTextDataset. Am I correct?\r\n> \r\n> > class LineByLineTextDataset(Dataset):\r\n> > \"\"\"\r\n> > This will be superseded by a framework-agnostic approach\r\n> > soon.\r\n> > \"\"\"\r\n> \r\n> ```\r\n> def __init__(self, tokenizer: PreTrainedTokenizer, file_path: str, block_size: int):\r\n> assert os.path.isfile(file_path), f\"Input file path {file_path} not found\"\r\n> # Here, we do not cache the features, operating under the assumption\r\n> # that we will soon use fast multithreaded tokenizers from the\r\n> # `tokenizers` repo everywhere =)\r\n> #logger.info(\"Creating features from dataset file at %s\", file_path)\r\n> #with open(file_path, encoding=\"utf-8\") as f:\r\n> # lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]\r\n> #batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)\r\n> \r\n> import glob\r\n> files = glob.glob('/home/mtzhang111/fairseq/cs_doc/shards/shard_003*')\r\n> from datasets import load_dataset\r\n> dataset = load_dataset('text', data_files=files)\r\n> batch_encoding= dataset.map(encode, batched=True)\r\n> self.examples = batch_encoding[\"input_ids\"]\r\n> \r\n> \r\n> def encode(examples):\r\n> return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> \r\n> def __len__(self):\r\n> return len(self.examples)\r\n> \r\n> def __getitem__(self, i) -> torch.Tensor:\r\n> return torch.tensor(self.examples[i], dtype=torch.long)\r\n> ```\r\n\r\nI am also struggling with this adaptation. \r\nI am not sure whether I am right.\r\n\r\nI think you don't need to construct `class LazyLineByLineTextDataset(Dataset)` at all. \r\ntorch.utils.data.Dataset is a generator. \r\n\r\nNow, we use `dataset = dataset.map(encode, batched=True)` as a generator. So we just pass dataset to torch.utils.data.DataLoader. ",
"@chiyuzhang94 Thanks for your reply. After some changes, currently, I managed to make the data loading process running.\r\nI published it in case you might want to take a look. Thanks for your help!\r\nhttps://github.com/shizhediao/Transformers_TPU",
"Hi @shizhediao ,\r\n\r\nThanks! It looks great!\r\n\r\nBut my problem still is the cache directory is a read-only file system. \r\n[As I mentioned](https://github.com/huggingface/datasets/issues/610#issuecomment-693912285), I tried to change the cache directory but it didn't work. \r\n\r\nDo you have any suggestions?\r\n\r\n",
"> I installed datasets at /project/chiyuzh/evn_py36/datasets/src where is a writable directory.\r\n> I also tried change the environment variables to the writable directory:\r\n> `export HF_MODULES_PATH=/project/chiyuzh/evn_py36/datasets/cache_dir/`\r\n\r\nI think it is `HF_MODULES_CACHE` and not `HF_MODULES_PATH` @chiyuzhang94 .\r\nCould you try again and let me know if it fixes your issue ?\r\n",
"We should probably add a section in the doc on the caching system with the env variables in particular.",
"Hi @thomwolf , @lhoestq ,\r\n\r\nThanks for your suggestions. With the latest version of this package, I can load text data without Internet. \r\n\r\nBut I found the speed of dataset loading is very slow. \r\n\r\nMy scrips like this: \r\n```\r\n def token_encode(examples):\r\n tokenizer_out = tokenizer(examples['text'], truncation=True, padding=\"max_length\", add_special_tokens=True, max_length=args.block_size)\r\n return tokenizer_out\r\n \r\n path = Path(file_path)\r\n files = sorted(path.glob('*'))\r\n dataset = load_dataset('./text.py', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")\r\n dataset = dataset.map(token_encode, batched=True)\r\n\r\n dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n```\r\n\r\nI have 1,123,870,657 lines in my input directory. \r\nI can find the processing speed as following. It is very slow. \r\n```\r\n| 13/1123871 [00:02<62:37:39, 4.98ba/s]^M 0%| \r\n| 14/1123871 [00:03<61:27:31, 5.08ba/s]^M 0%| \r\n| 15/1123871 [00:03<66:34:19, 4.69ba/s]^M 0%| \r\n| 16/1123871 [00:03<68:25:01, 4.56ba/s]^M 0%| \r\n| 17/1123871 [00:03<72:00:03, 4.34ba/s]^M 0%| \r\n```\r\nDo you have any suggestions to accelerate this loading process?",
"You can use multiprocessing by specifying `num_proc=` in `.map()`\r\n\r\nAlso it looks like you have `1123871` batches of 1000 elements (default batch size), i.e. 1,123,871,000 lines in total.\r\nAm I right ?",
"> You can use multiprocessing by specifying `num_proc=` in `.map()`\r\n> \r\n> Also it looks like you have `1123871` batches of 1000 elements (default batch size), i.e. 1,123,871,000 lines in total.\r\n> Am I right ?\r\n\r\nHi @lhoestq ,\r\n\r\nThanks. I will try it.\r\n\r\nYou are right. I have 1,123,870,657 lines totally in the path. I split the large file into 440 small files. Each file has 2,560,000 lines.\r\n\r\nI have another question. Because I am using a cloud server where only allows running a job up to 7 days. Hence, I need to resume my model every week. If the script needs to load and process the dataset every time. It is very low efficient based on the current processing speed. Is it possible that I process the dataset once and use the process cache to in the future work? \r\n",
"Hi @lhoestq ,\r\n\r\nI tried to use multi-processor, but I got errors as follow: \r\nBecause I am using python distributed training, it seems some conflicts with the distributed job. \r\n\r\nDo you have any suggestions?\r\n```\r\nI0925 10:19:35.603023 140737353971520 filelock.py:318] Lock 140737229443368 released on /tmp/pbs.1120510.pbsha.ib.sockeye/cache/_tmp_pbs.1120510.pbsha.ib.sockeye_cache_text_default-7fb934ed6fac5d01_0.0.0_512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7\r\nfcc649178b014.lock\r\nTraceback (most recent call last):\r\n File \"/scratch/chiyuzh/roberta/run_language_modeling.py\", line 1024, in <module>\r\n main()\r\n File \"/scratch/chiyuzh/roberta/run_language_modeling.py\", line 967, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)\r\n File \"/scratch/chiyuzh/roberta/run_language_modeling.py\", line 180, in load_and_cache_examples\r\n return HG_Datasets(tokenizer, file_path, args)\r\n File \"/scratch/chiyuzh/roberta/run_language_modeling.py\", line 119, in HG_Datasets\r\n dataset = dataset.map(token_encode, batched=True, batch_size = 10000, num_proc = 16)\r\n File \"/project/chiyuzh/evn_py36/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1287, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/project/chiyuzh/evn_py36/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1287, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/project/chiyuzh/evn_py36/lib/python3.6/multiprocessing/pool.py\", line 644, in get\r\n raise self._value\r\n File \"/project/chiyuzh/evn_py36/lib/python3.6/multiprocessing/pool.py\", line 424, in _handle_tasks\r\n put(task)\r\n File \"/project/chiyuzh/evn_py36/lib/python3.6/multiprocessing/connection.py\", line 206, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"/project/chiyuzh/evn_py36/lib/python3.6/multiprocessing/reduction.py\", line 51, in dumps\r\n cls(buf, protocol).dump(obj)\r\nAttributeError: Can't pickle local object 'HG_Datasets.<locals>.token_encode'\r\n```",
"For multiprocessing, the function given to `map` must be picklable.\r\nMaybe you could try to define `token_encode` outside `HG_Datasets` ?\r\n\r\nAlso maybe #656 could make functions defined locally picklable for multiprocessing, once it's merged.",
"> I have another question. Because I am using a cloud server where only allows running a job up to 7 days. Hence, I need to resume my model every week. If the script needs to load and process the dataset every time. It is very low efficient based on the current processing speed. Is it possible that I process the dataset once and use the process cache to in the future work?\r\n\r\nFeel free to save your processed dataset using `dataset.save_to_disk(\"path/to/save/directory\")`.\r\n\r\nThen you'll be able to reload it again using\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ndataset = load_from_disk(\"path/to/save/directory\")\r\n```",
"Hi @lhoestq ,\r\n\r\nThanks for your suggestion. \r\nI tried to process the dataset and save it to disk. \r\nI have 1.12B samples in the raw dataset. I used 16 processors.\r\nI run this process job for 7 days. But it didn't finish. I don't why the processing is such slow. \r\n\r\nThe log shows that some processors (\\#12, \\#14, \\#15) are very slow. The different processor has a different speed. These slow processors look like a bottleneck. \r\n\r\nCould you please give me any suggestion to improve the processing speed? \r\n\r\nThanks. \r\nChiyu\r\n\r\nHere is my code:\r\n```\r\ndef token_encode(examples):\r\n tokenizer_out = tokenizer(examples['text'], truncation=True, padding=\"max_length\", add_special_tokens=True, max_length=args.block_size)\r\n return tokenizer_out\r\n\r\npath = Path(file_path)\r\nfiles = sorted(path.glob('*'))\r\ndataset = load_dataset('./text.py', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")\r\ndataset = dataset.map(token_encode, batched=True, batch_size = 16384, num_proc = 16)\r\ndataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\ndataset.save_to_disk(output_dir)\r\n```\r\nHere is the log. \r\n```\r\n^M#6: 1%|▏ | 59/4288 [55:10<66:11:58, 56.35s/ba]\r\n^M#1: 8%|▊ | 356/4288 [55:39<10:40:02, 9.77s/ba]\r\n^M#2: 5%|▍ | 210/4288 [55:33<17:47:19, 15.70s/ba]\r\n^M#0: 19%|█▉ | 836/4288 [55:53<4:08:56, 4.33s/ba]\r\n^M#0: 20%|█▉ | 837/4288 [55:57<4:01:52, 4.21s/ba]\r\n^M#1: 8%|▊ | 357/4288 [55:48<10:38:09, 9.74s/ba]\r\n^M#0: 20%|█▉ | 838/4288 [56:01<4:02:56, 4.23s/ba]\r\n^M#3: 4%|▎ | 155/4288 [55:43<24:41:20, 21.51s/ba]\r\n^M#0: 20%|█▉ | 839/4288 [56:05<4:04:48, 4.26s/ba]\r\n^M#12: 1%| | 29/4288 [54:50<133:20:53, 112.72s/ba]\r\n^M#2: 5%|▍ | 211/4288 [55:48<17:40:33, 15.61s/ba]\r\n^M#14: 0%| | 2/4288 [04:24<157:17:50, 132.12s/ba]\r\n^M#15: 0%| | 1/4288 [02:24<172:11:37, 144.60s/ba]\r\n```",
"Hi !\r\n\r\nAs far as I can tell, there could be several reasons for your processes to have different speeds:\r\n- some parts of your dataset have short passages while some have longer passages, that take more time to be processed\r\n- OR there are other processes running that prevent some of them to run at full speed\r\n- OR the value of `num_proc` is higher than the number of actual processes that you can run in parallel at full speed.\r\n\r\nSo I'd suggest you to check that you have nothing else running in parallel to your processing job, and also maybe take a look at the slow parts of the datasets.\r\nWhen doing multiprocessing, the dataset is sharded in `num_proc` contiguous parts that are processed individually in each process. If you want to take a look at the dataset processed in the 12th shard of 16 for example, you can do:\r\n\r\n```python\r\nmy_shard = dataset.shard(num_shards=16, index=12, contiguous=True)\r\n```\r\n\r\nHope this helps, let me know if you find what is causing this slow down.",
"Do you use a fast or a slow tokenizer from the `transformers` library @chiyuzhang94?",
"> Do you use a fast or a slow tokenizer from the `transformers` library @chiyuzhang94?\r\n\r\nHi @thomwolf ,\r\n I use this: \r\n```\r\nfrom transformers import\r\nAutoTokenizer.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir)\r\n```\r\n\r\nI guess this is a slow one, let me explore the fast tokenizer. ",
"> Hi !\r\n> \r\n> As far as I can tell, there could be several reasons for your processes to have different speeds:\r\n> \r\n> * some parts of your dataset have short passages while some have longer passages, that take more time to be processed\r\n> * OR there are other processes running that prevent some of them to run at full speed\r\n> * OR the value of `num_proc` is higher than the number of actual processes that you can run in parallel at full speed.\r\n> \r\n> So I'd suggest you to check that you have nothing else running in parallel to your processing job, and also maybe take a look at the slow parts of the datasets.\r\n> When doing multiprocessing, the dataset is sharded in `num_proc` contiguous parts that are processed individually in each process. If you want to take a look at the dataset processed in the 12th shard of 16 for example, you can do:\r\n> \r\n> ```python\r\n> my_shard = dataset.shard(num_shards=16, index=12, contiguous=True)\r\n> ```\r\n> \r\n> Hope this helps, let me know if you find what is causing this slow down.\r\n\r\nHi @lhoestq ,\r\n\r\nThanks for your suggestions. \r\nI don't think my problem is due to any one of these seasons. \r\n\r\n1. I have 1,123,870,657 lines totally in the path. I split the large file into 440 small files. Each file has 2,560,000 lines. The last file is smaller a little bit. But they are similar. I randomly shuffled all the 1,123,870,657 lines. Hence, the sequences should also be similar across all the files. \r\n\r\n2. I run this script on the entire node. I requested all the resources on the nodes (40 CPUs, 384GB memory). Hence, these were not any other processes. \r\n\r\n 3. As I say, the node has 40 CPUs, but I set num_proc = 16. This should not be a problem.",
"Hi @thomwolf \r\nI am using `RobertaTokenizerFast` now. \r\n\r\nBut the speed is still imbalanced, some processors are still slow. \r\nHere is the part of the log. #0 is always much fast than lower rank processors. \r\n\r\n```\r\n#15: 3%|▎ | 115/3513 [3:18:36<98:01:33, 103.85s/ba]\r\n#2: 24%|██▍ | 847/3513 [3:20:43<11:06:49, 15.01s/ba]\r\n#1: 37%|███▋ | 1287/3513 [3:20:52<6:19:02, 10.22s/ba]\r\n#0: 72%|███████▏ | 2546/3513 [3:20:52<1:51:03, 6.89s/ba]\r\n#3: 18%|█▊ | 617/3513 [3:20:36<15:50:30, 19.69s/ba]\r\n#0: 73%|███████▎ | 2547/3513 [3:20:59<1:50:25, 6.86s/ba]\r\n#1: 37%|███▋ | 1288/3513 [3:21:02<6:21:13, 10.28s/ba]\r\n#7: 7%|▋ | 252/3513 [3:20:09<44:09:03, 48.74s/ba]\r\n#12: 4%|▍ | 144/3513 [3:19:19<78:00:54, 83.36s/ba]\r\n#4: 14%|█▍ | 494/3513 [3:20:37<20:46:06, 24.77s/ba]\r\n#0: 73%|███████▎ | 2548/3513 [3:21:06<1:49:26, 6.80s/ba]\r\n#2: 24%|██▍ | 848/3513 [3:20:58<11:06:17, 15.00s/ba]\r\n```\r\nHere is my script related to the datasets processing, \r\n\r\n```\r\ntokenizer = RobertaTokenizerFast.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir)\r\n\r\ndef token_encode(examples):\r\n tokenizer_out = tokenizer(examples['text'], truncation=True, padding=\"max_length\", add_special_tokens=True, max_length=128)\r\n return tokenizer_out\r\n\r\ndef HG_Datasets(tokenizer, file_path, args):\r\n \r\n path = Path(file_path)\r\n files = sorted(path.glob('*'))\r\n dataset = load_dataset('./text.py', data_files=files, cache_dir = \"\"./, split=\"train\")\r\n dataset = dataset.map(token_encode, batched=True, batch_size = 20000, num_proc = 16)\r\n\r\n dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n return dataset\r\n\r\n```\r\nI have 1,123,870,657 lines totally in the path. I split the large file into 440 small files. Each file has 2,560,000 lines.\r\n\r\nCould you please give any suggestion? Thanks very much!!"
] | 2020-09-10T18:41:38
| 2022-11-22T13:51:24
| 2022-11-22T13:51:23
|
NONE
| null | null | null | null |
I migrated my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a RoBERTa model from scratch using transformers, but I got OOM issues when loading a large text file.
Following the suggestion from @thomwolf, I tried to use `datasets` to load my text file. This test.txt is a simple sample where each line is a sentence.
```
from datasets import load_dataset
import torch

dataset = load_dataset('text', data_files='test.txt', cache_dir="./")
dataset.set_format(type='torch', columns=["text"])
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
next(iter(dataloader))
```
But the dataloader cannot yield a sample and the error is:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-12-388aca337e2f> in <module>
----> 1 next(iter(dataloader))
/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
361
362 def __next__(self):
--> 363 data = self._next_data()
364 self._num_yielded += 1
365 if self._dataset_kind == _DatasetKind.Iterable and \
/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
401 def _next_data(self):
402 index = self._next_index() # may raise StopIteration
--> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
404 if self._pin_memory:
405 data = _utils.pin_memory.pin_memory(data)
/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
KeyError: 0
```
`dataset.set_format(type='torch',columns=["text"])` returns a log says:
```
Set __getitem__(key) output type to torch for ['text'] columns (when key is int or slice) and don't output other (un-formatted) columns.
```
I noticed the dataset is `DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None)}, num_rows: 44)})`.
Each sample can be accessed by `dataset["train"]["text"]` instead of `dataset["text"]`.
Could you please give me any suggestions on how to modify this code to load the text file?
Versions:
Python version 3.7.3
PyTorch version 1.6.0
TensorFlow version 2.3.0
datasets version: 1.0.1
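
For reference, a minimal sketch of a pattern that avoids this `KeyError` (the checkpoint name is only a placeholder and it assumes `test.txt` is the same sample file): loading with an explicit split returns a `Dataset` instead of a `DatasetDict`, and tokenizing before `set_format` gives the DataLoader numeric tensors to collate.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# split="train" returns a Dataset (not a DatasetDict), so dataset[0] works.
dataset = load_dataset("text", data_files="test.txt", split="train")

# Tokenize first so the formatted columns are numeric and can be collated into tensors.
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])

dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
batch = next(iter(dataloader))
```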
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/610/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/610/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 802 days, 19:09:45
|
https://api.github.com/repos/huggingface/datasets/issues/608
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/608/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/608/comments
|
https://api.github.com/repos/huggingface/datasets/issues/608/events
|
https://github.com/huggingface/datasets/issues/608
| 698,291,156
|
MDU6SXNzdWU2OTgyOTExNTY=
| 608
|
Don't use the old NYU GLUE dataset URLs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4",
"events_url": "https://api.github.com/users/jeswan/events{/privacy}",
"followers_url": "https://api.github.com/users/jeswan/followers",
"following_url": "https://api.github.com/users/jeswan/following{/other_user}",
"gists_url": "https://api.github.com/users/jeswan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jeswan",
"id": 57466294,
"login": "jeswan",
"node_id": "MDQ6VXNlcjU3NDY2Mjk0",
"organizations_url": "https://api.github.com/users/jeswan/orgs",
"received_events_url": "https://api.github.com/users/jeswan/received_events",
"repos_url": "https://api.github.com/users/jeswan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jeswan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeswan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jeswan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Feel free to open the PR ;)\r\nThanks for updating the dataset_info.json file !"
] | 2020-09-10T17:47:02
| 2020-09-16T06:53:18
| 2020-09-16T06:53:18
|
CONTRIBUTOR
| null | null | null | null |
NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR?
See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/1112
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/608/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/608/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 5 days, 13:06:16
|
https://api.github.com/repos/huggingface/datasets/issues/600
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/600/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/600/comments
|
https://api.github.com/repos/huggingface/datasets/issues/600/events
|
https://github.com/huggingface/datasets/issues/600
| 697,496,913
|
MDU6SXNzdWU2OTc0OTY5MTM=
| 600
|
Pickling error when loading dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17310286?v=4",
"events_url": "https://api.github.com/users/kandorm/events{/privacy}",
"followers_url": "https://api.github.com/users/kandorm/followers",
"following_url": "https://api.github.com/users/kandorm/following{/other_user}",
"gists_url": "https://api.github.com/users/kandorm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kandorm",
"id": 17310286,
"login": "kandorm",
"node_id": "MDQ6VXNlcjE3MzEwMjg2",
"organizations_url": "https://api.github.com/users/kandorm/orgs",
"received_events_url": "https://api.github.com/users/kandorm/received_events",
"repos_url": "https://api.github.com/users/kandorm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kandorm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kandorm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kandorm",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"When I change from python3.6 to python3.8, it works! ",
"Does it work when you install `nlp` from source on python 3.6?",
"No, still the pickling error.",
"I wasn't able to reproduce on google colab (python 3.6.9 as well) with \r\n\r\npickle==4.0\r\ndill=0.3.2\r\ntransformers==3.1.0\r\ndatasets=1.0.1 (also tried nlp 0.4.0)\r\n\r\nIf I try\r\n\r\n```python\r\nfrom datasets import load_dataset # or from nlp\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=512), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\n```\r\nIt runs without error",
"Closing since it looks like it's working on >= 3.6.9\r\nFeel free to re-open if you have other questions :)"
] | 2020-09-10T06:28:08
| 2020-09-25T14:31:54
| 2020-09-25T14:31:54
|
NONE
| null | null | null | null |
Hi,
I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as:
```
# line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```
When I run this with transformers (3.1.0) and nlp (0.4.0), I get the following error:
```
Traceback (most recent call last):
File "src/run_language_modeling.py", line 319, in <module>
main()
File "src/run_language_modeling.py", line 248, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "src/run_language_modeling.py", line 139, in get_dataset
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True)
File "/data/nlp/src/nlp/arrow_dataset.py", line 1136, in map
new_fingerprint=new_fingerprint,
File "/data/nlp/src/nlp/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/data/nlp/src/nlp/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/data/nlp/src/nlp/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/data/nlp/src/nlp/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/data/nlp/src/nlp/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/data/nlp/src/nlp/utils/py_utils.py", line 362, in dumps
dump(obj, file)
File "/data/nlp/src/nlp/utils/py_utils.py", line 339, in dump
Pickler(file, recurse=True).dump(obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump
StockPickler.dump(self, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function
obj.__dict__, fkwdefaults), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type
obj.__bases__, _dict), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 507, in save
self.save_global(obj, rv)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 927, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union
```
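
For reference, a minimal sketch of an alternative formulation (the data file path is a placeholder, and this only sidesteps hashing a locally defined lambda rather than fixing the underlying pickling issue): a module-level named function is usually easier for the caching fingerprint to pickle.
```python
from datasets import load_dataset
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# A module-level named function (instead of a lambda defined inside another function)
# is generally easier to pickle/hash when map() computes its caching fingerprint.
def tokenize(examples):
    return tokenizer(examples["text"], add_special_tokens=True, truncation=True, max_length=512)

dataset = load_dataset("text", data_files="train.txt", split="train")  # placeholder path
dataset = dataset.map(tokenize, batched=True)
dataset.set_format(type="torch", columns=["input_ids"])
```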
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/600/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/600/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 15 days, 8:03:46
|
https://api.github.com/repos/huggingface/datasets/issues/598
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/598/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/598/comments
|
https://api.github.com/repos/huggingface/datasets/issues/598/events
|
https://github.com/huggingface/datasets/issues/598
| 697,156,501
|
MDU6SXNzdWU2OTcxNTY1MDE=
| 598
|
The current version of the package on github has an error when loading dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43428393?v=4",
"events_url": "https://api.github.com/users/zeyuyun1/events{/privacy}",
"followers_url": "https://api.github.com/users/zeyuyun1/followers",
"following_url": "https://api.github.com/users/zeyuyun1/following{/other_user}",
"gists_url": "https://api.github.com/users/zeyuyun1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zeyuyun1",
"id": 43428393,
"login": "zeyuyun1",
"node_id": "MDQ6VXNlcjQzNDI4Mzkz",
"organizations_url": "https://api.github.com/users/zeyuyun1/orgs",
"received_events_url": "https://api.github.com/users/zeyuyun1/received_events",
"repos_url": "https://api.github.com/users/zeyuyun1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zeyuyun1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zeyuyun1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zeyuyun1",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting !\r\nWhich version of transformers are you using ?\r\nIt looks like it doesn't have the PreTrainedTokenizerBase class",
"I was using transformer 2.9. And I switch to the latest transformer package. Everything works just fine!!\r\n\r\nThanks for helping! I should look more carefully next time. Didn't realize loading the data part requires using tokenizer.\r\n",
"Yes it shouldn’t fail with older version of transformers since this is only a special feature to make caching more efficient when using transformers for tokenization.\r\nWe’ll update this."
] | 2020-09-09T21:03:23
| 2020-09-10T06:25:21
| 2020-09-09T22:57:28
|
NONE
| null | null | null | null |
Instead of installing the package from pip, installing the version from source results in an error when loading a dataset (the pip version is completely fine):
To recreate the error:
First, install nlp directly from source:
```
git clone https://github.com/huggingface/nlp.git
cd nlp
pip install -e .
```
Then run:
```
from nlp import load_dataset
dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train')
```
will give the following error:
```
>>> dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train')
Checking /home/zeyuy/.cache/huggingface/datasets/84a754b488511b109e2904672d809c041008416ae74e38f9ee0c80a8dffa1383.2e21f48d63b5572d19c97e441fbb802257cf6a4c03fbc5ed8fae3d2c2273f59e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Found script file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.py
Found dataset infos file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/dataset_infos.json to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.json
Loading Dataset Infos from /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Overwrite dataset info from restored data version.
Loading Dataset info from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Reusing dataset wikitext (/home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d)
Constructing Dataset for split train, from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/load.py", line 600, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 611, in as_dataset
datasets = utils.map_nested(
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 216, in map_nested
return function(data_struct)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 631, in _build_single_dataset
ds = self._as_dataset(
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 704, in _as_dataset
return Dataset(**dataset_kwargs)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/arrow_dataset.py", line 188, in __init__
self._fingerprint = generate_fingerprint(self)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 91, in generate_fingerprint
hasher.update(key)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 361, in dumps
with _no_cache_fields(obj):
File "/home/zeyuy/miniconda3/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 348, in _no_cache_fields
if isinstance(obj, tr.PreTrainedTokenizerBase) and hasattr(obj, "cache") and isinstance(obj.cache, dict):
AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
```
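
For reference, a minimal diagnostic sketch (output will vary per environment): it simply checks whether the installed `transformers` release defines the class that the caching guard in `nlp` expects to find.
```python
import transformers

# Older transformers releases (e.g. 2.x) do not define PreTrainedTokenizerBase,
# which is the attribute the nlp caching guard looks up.
print(transformers.__version__)
print(hasattr(transformers, "PreTrainedTokenizerBase"))
```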
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43428393?v=4",
"events_url": "https://api.github.com/users/zeyuyun1/events{/privacy}",
"followers_url": "https://api.github.com/users/zeyuyun1/followers",
"following_url": "https://api.github.com/users/zeyuyun1/following{/other_user}",
"gists_url": "https://api.github.com/users/zeyuyun1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zeyuyun1",
"id": 43428393,
"login": "zeyuyun1",
"node_id": "MDQ6VXNlcjQzNDI4Mzkz",
"organizations_url": "https://api.github.com/users/zeyuyun1/orgs",
"received_events_url": "https://api.github.com/users/zeyuyun1/received_events",
"repos_url": "https://api.github.com/users/zeyuyun1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zeyuyun1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zeyuyun1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zeyuyun1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/598/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/598/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1:54:05
|
https://api.github.com/repos/huggingface/datasets/issues/597
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/597/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/597/comments
|
https://api.github.com/repos/huggingface/datasets/issues/597/events
|
https://github.com/huggingface/datasets/issues/597
| 697,112,029
|
MDU6SXNzdWU2OTcxMTIwMjk=
| 597
|
Indices incorrect with multiprocessing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joeddav",
"id": 9353833,
"login": "joeddav",
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"repos_url": "https://api.github.com/users/joeddav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joeddav",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"I fixed a bug that could cause this issue earlier today. Could you pull the latest version and try again ?",
"Still the case on master.\r\nI guess we should have an offset in the multi-procs indeed (hopefully it's enough).\r\n\r\nAlso, side note is that we should add some logging before the \"test\" to say we are testing the function otherwise its confusing for the user to see two outputs I think. Proposal (see the \"Testing the mapped function outputs:\" lines):\r\n```\r\n>>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2)\r\nDone writing 10 indices in 80 bytes .\r\nDone writing 5 indices in 41 bytes .\r\nDone writing 5 indices in 41 bytes .\r\nSpawning 2 processes\r\nTesting the mapped function outputs:\r\ninds: [0, 1]\r\ninds: [0, 1]\r\nTesting finished, running the mapped function on the dataset:\r\n#0: 0%| | 0/1 [00:00<?, ?ba/s]\r\ninds: [0, 1, 2, 3, 4] inds: [0, 1, 2, 3, 4] | 0/1 [00:00<?, ?ba/s]\r\n#0: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1321.04ba/s]\r\n#1: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1841.22ba/s]\r\nConcatenating 2 shards from multiprocessing\r\nDataset(features: {'text': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None)}, num_rows: 10)\r\n```"
] | 2020-09-09T19:50:56
| 2020-09-10T11:03:37
| 2020-09-10T11:03:37
|
CONTRIBUTOR
| null | null | null | null |
When `num_proc` > 1, the indices argument passed to the map function is incorrect:
```python
d = load_dataset('imdb', split='test[:1%]')
def fn(x, inds):
print(inds)
return x
d.select(range(10)).map(fn, with_indices=True, batched=True)
# [0, 1]
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2)
# [0, 1]
# [0, 1]
# [0, 1, 2, 3, 4]
# [0, 1, 2, 3, 4]
```
As you can see, the subset passed to each process is indexed from 0 to N, which doesn't reflect its position in `d`.
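
For reference, a minimal sketch of one possible workaround until this is fixed (it assumes a version whose `map` accepts `fn_kwargs`): shard the dataset manually, pass each shard its starting offset, and concatenate the results.
```python
from datasets import load_dataset, concatenate_datasets

d = load_dataset("imdb", split="test[:1%]").select(range(10))

def fn(batch, inds, offset):
    # Shift the shard-local indices by the shard's starting position in `d`.
    print([offset + i for i in inds])
    return batch

num_shards = 2
shards = [d.shard(num_shards=num_shards, index=i, contiguous=True) for i in range(num_shards)]
offsets = [sum(len(s) for s in shards[:i]) for i in range(num_shards)]

processed = concatenate_datasets([
    shard.map(fn, with_indices=True, batched=True, fn_kwargs={"offset": off})
    for shard, off in zip(shards, offsets)
])
```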
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/597/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/597/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 15:12:41
|
https://api.github.com/repos/huggingface/datasets/issues/595
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/595/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/595/comments
|
https://api.github.com/repos/huggingface/datasets/issues/595/events
|
https://github.com/huggingface/datasets/issues/595
| 696,892,304
|
MDU6SXNzdWU2OTY4OTIzMDQ=
| 595
|
`Dataset`/`DatasetDict` has no attribute 'save_to_disk'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/488428?v=4",
"events_url": "https://api.github.com/users/sudarshan85/events{/privacy}",
"followers_url": "https://api.github.com/users/sudarshan85/followers",
"following_url": "https://api.github.com/users/sudarshan85/following{/other_user}",
"gists_url": "https://api.github.com/users/sudarshan85/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sudarshan85",
"id": 488428,
"login": "sudarshan85",
"node_id": "MDQ6VXNlcjQ4ODQyOA==",
"organizations_url": "https://api.github.com/users/sudarshan85/orgs",
"received_events_url": "https://api.github.com/users/sudarshan85/received_events",
"repos_url": "https://api.github.com/users/sudarshan85/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sudarshan85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sudarshan85/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sudarshan85",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"`pip install git+https://github.com/huggingface/nlp.git` should have done the job.\r\n\r\nDid you uninstall `nlp` before installing from github ?",
"> Did you uninstall `nlp` before installing from github ?\r\n\r\nI did not. I created a new environment and installed `nlp` directly from `github` and it worked!\r\n\r\nThanks.\r\n"
] | 2020-09-09T15:01:52
| 2020-09-09T16:20:19
| 2020-09-09T16:20:18
|
NONE
| null | null | null | null |
Hi,
As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.py` which is saved after `pip install nlp -U` in my `conda` environment DOES NOT contain the `save_to_disk` method. I even tried `pip install git+https://github.com/huggingface/nlp.git ` and still no luck. Do I need to install the library in another way?
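
For reference, a minimal diagnostic sketch (paths and versions will differ per machine): it shows which installation Python actually picks up and whether that installation already has the method.
```python
import inspect
import nlp

# If inspect.getfile points at the pip-installed package rather than the source
# checkout, the environment never picked up the GitHub install.
print(nlp.__version__)
print(inspect.getfile(nlp))
print(hasattr(nlp.Dataset, "save_to_disk"))
```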
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/488428?v=4",
"events_url": "https://api.github.com/users/sudarshan85/events{/privacy}",
"followers_url": "https://api.github.com/users/sudarshan85/followers",
"following_url": "https://api.github.com/users/sudarshan85/following{/other_user}",
"gists_url": "https://api.github.com/users/sudarshan85/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sudarshan85",
"id": 488428,
"login": "sudarshan85",
"node_id": "MDQ6VXNlcjQ4ODQyOA==",
"organizations_url": "https://api.github.com/users/sudarshan85/orgs",
"received_events_url": "https://api.github.com/users/sudarshan85/received_events",
"repos_url": "https://api.github.com/users/sudarshan85/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sudarshan85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sudarshan85/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sudarshan85",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/595/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/595/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1:18:26
|
https://api.github.com/repos/huggingface/datasets/issues/590
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/590/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/590/comments
|
https://api.github.com/repos/huggingface/datasets/issues/590/events
|
https://github.com/huggingface/datasets/issues/590
| 696,501,827
|
MDU6SXNzdWU2OTY1MDE4Mjc=
| 590
|
The process cannot access the file because it is being used by another process (windows)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4",
"events_url": "https://api.github.com/users/saareliad/events{/privacy}",
"followers_url": "https://api.github.com/users/saareliad/followers",
"following_url": "https://api.github.com/users/saareliad/following{/other_user}",
"gists_url": "https://api.github.com/users/saareliad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/saareliad",
"id": 22762845,
"login": "saareliad",
"node_id": "MDQ6VXNlcjIyNzYyODQ1",
"organizations_url": "https://api.github.com/users/saareliad/orgs",
"received_events_url": "https://api.github.com/users/saareliad/received_events",
"repos_url": "https://api.github.com/users/saareliad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/saareliad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saareliad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/saareliad",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, which version of `nlp` are you using?\r\n\r\nBy the way we'll be releasing today a significant update fixing many issues (but also comprising a few breaking changes).\r\nYou can see more informations here #545 and try it by installing from source from the master branch.",
"I'm using version 0.4.0.\r\n\r\n",
"Ok, it's probably fixed on master. Otherwise if you can give me a fully self-contained exemple to reproduce the error, I can try to investigate.",
"I get the same behavior, on Windows, when `map`ping a function to a loaded dataset. \r\nThe error doesn't occur if I re-run the cell a second time though! \r\nI'm on version 1.0.1.",
"This is going to be fixed by #644 ",
"@saareliad I got the same issue that troubled me quite a while. Unfortunately, there are no good answers to this issue online, I tried it on Linux and that's absolutely fine. After hacking the source code, I solved this problem as follows.\r\n\r\nIn the source code file: arrow_dataset.py -> _map_single(...)\r\n\r\nchange\r\n```python\r\nif update_data and tmp_file is not None:\r\n shutil.move(tmp_file.name, cache_file_name)\r\n```\r\nto\r\n```python\r\ntmp_file.close()\r\nif update_data and tmp_file is not None:\r\n shutil.move(tmp_file.name, cache_file_name)\r\n```\r\n\r\nThen it works without needing multiple times runs to avoid the permission error.\r\nI know this solution is unusual since it changes the source code. Hopefully, the lib's contributors can have better solutions in the future.\r\n",
"@wangcongcong123 thanks for sharing.\n(BTW I also solved it locally on windows by putting the problematic line under try except and not using cache... On windows I just needed 1% of the dataset anyway)"
] | 2020-09-09T07:01:36
| 2020-09-25T14:02:28
| 2020-09-25T14:02:28
|
NONE
| null | null | null | null |
Hi, I consistently get the following error when developing on my PC (Windows 10):
```
train_dataset = train_dataset.map(convert_to_features, batched=True)
File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map
shutil.move(tmp_file.name, cache_file_name)
File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\shutil.py", line 803, in move
os.unlink(src)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\saareliad\\.cache\\huggingface\\datasets\\squad\\plain_text\\1.0.0\\408a8fa46a1e2805445b793f1022e743428ca739a34809fce872f0c7f17b44ab\\tmpsau1bep1'
```
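
For reference, a minimal standalone sketch of the underlying Windows behaviour, independent of the library (the destination filename is arbitrary): an open `NamedTemporaryFile` stays locked on Windows, so it has to be closed before it can be moved, which is essentially what the workaround quoted in the comments does inside `_map_single`.
```python
import shutil
import tempfile

# On Windows an open NamedTemporaryFile is locked, so moving it fails with
# "the process cannot access the file"; closing it first avoids the error.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"some data")
tmp.close()  # close before moving
shutil.move(tmp.name, "destination.bin")
```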
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/590/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/590/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 16 days, 7:00:52
|
https://api.github.com/repos/huggingface/datasets/issues/589
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/589/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/589/comments
|
https://api.github.com/repos/huggingface/datasets/issues/589/events
|
https://github.com/huggingface/datasets/issues/589
| 696,488,447
|
MDU6SXNzdWU2OTY0ODg0NDc=
| 589
|
Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ksjae",
"id": 17930170,
"login": "ksjae",
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"repos_url": "https://api.github.com/users/ksjae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ksjae",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-09-09T06:46:53
| 2020-09-09T08:57:54
| 2020-09-09T08:57:54
|
NONE
| null | null | null | null |
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 61, in import_main_class
module = importlib.import_module(module_path)
File "/root/anaconda3/envs/pytorch/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/datasets/text/5dc629379536c4037d9c2063e1caa829a1676cf795f8e030cd90a537eba20c08/text.py", line 9, in <module>
logger = nlp.utils.logging.get_logger(__name__)
AttributeError: module 'nlp.utils' has no attribute 'logging'
```
This occurs with the following code, or any code that includes `load_dataset('text')`:
```
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```
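
For reference, a minimal diagnostic sketch (output will vary per environment): the downloaded `text.py` script calls `nlp.utils.logging.get_logger`, so this simply checks whether the installed `nlp` release is recent enough to provide that module.
```python
import nlp

# An AttributeError (or False) here means the installed nlp release predates
# the nlp.utils.logging module that the remote text.py script expects.
print(nlp.__version__)
print(hasattr(nlp.utils, "logging"))
```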
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/589/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/589/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 2:11:01
|
https://api.github.com/repos/huggingface/datasets/issues/583
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/583/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/583/comments
|
https://api.github.com/repos/huggingface/datasets/issues/583/events
|
https://github.com/huggingface/datasets/issues/583
| 695,166,265
|
MDU6SXNzdWU2OTUxNjYyNjU=
| 583
|
ArrowIndexError on Dataset.select
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-09-07T14:36:29
| 2020-09-08T07:43:15
| 2020-09-08T07:43:15
|
MEMBER
| null | null | null | null |
If the indices table consists of several chunks, then `dataset.select` results in an `ArrowIndexError` for pyarrow < 1.0.0
Example:
```python
from nlp import load_dataset
mnli = load_dataset("glue", "mnli", split="train")
shuffled = mnli.shuffle(seed=42)
shuffled.select(list(range(len(mnli))))
```
raises:
```python
---------------------------------------------------------------------------
ArrowIndexError Traceback (most recent call last)
<ipython-input-64-006a5d38d418> in <module>
----> 1 mnli.shuffle(seed=42).select(list(range(len(mnli))))
~/Desktop/hf/nlp/src/nlp/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
~/Desktop/hf/nlp/src/nlp/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
1653 if self._indices is not None:
1654 if PYARROW_V0:
-> 1655 indices_array = self._indices.column(0).chunk(0).take(indices_array)
1656 else:
1657 indices_array = self._indices.column(0).take(indices_array)
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.Array.take()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: take index out of bounds
```
This is because the `take` method is only applied to the first chunk, which only contains 1000 elements by default (mnli has ~400,000 elements).
Shall we change that to use
```python
pa.concat_tables(self._indices._indices.slice(i, 1) for i in indices_array)
```
instead of `take` ? @thomwolf
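
For reference, a minimal standalone sketch of the difference with plain pyarrow (it assumes pyarrow >= 1.0, where `ChunkedArray.take` exists): taking on a single chunk fails for positions beyond that chunk, while taking on the whole chunked array works across chunk boundaries.
```python
import pyarrow as pa

# Two chunks of 1000 elements each, similar to the dataset's chunked indices column.
indices = pa.chunked_array([list(range(1000)), list(range(1000, 2000))])

# indices.chunk(0).take(pa.array([1500]))  # would raise: take index out of bounds
picked = indices.take(pa.array([5, 1500]))  # works across chunks on pyarrow >= 1.0
print(picked)
```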
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/583/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/583/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17:06:46
|
https://api.github.com/repos/huggingface/datasets/issues/582
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/582/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/582/comments
|
https://api.github.com/repos/huggingface/datasets/issues/582/events
|
https://github.com/huggingface/datasets/issues/582
| 695,126,456
|
MDU6SXNzdWU2OTUxMjY0NTY=
| 582
|
Allow for PathLike objects
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-09-07T13:54:51
| 2020-09-08T07:45:17
| 2020-09-08T07:45:17
|
CONTRIBUTOR
| null | null | null | null |
Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error.
```python
files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt"))
dataset = load_dataset("text", data_files=files)
```
Traceback:
```
Traceback (most recent call last):
File "C:/dev/python/dutch-simplification/main.py", line 7, in <module>
dataset = load_dataset("text", data_files=files)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare
self._save_info()
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 564, in _save_info
self.info.write_to_directory(self._cache_dir)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\info.py", line 149, in write_to_directory
self._dump_info(f)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\info.py", line 156, in _dump_info
file.write(json.dumps(asdict(self)).encode("utf-8"))
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: keys must be str, int, float, bool or None, not WindowsPath
```
We have to cast to a string explicitly to make this work. It would be nicer if we could actually use PathLike objects.
```python
files = [str(f) for f in Path(r"D:\corpora\wablieft").glob("*.txt")]
```
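Wrapped as a small reusable helper, the same workaround could look like the sketch below (`to_str_paths` is our own name, not part of `nlp`):
```python
import os
from pathlib import Path
from nlp import load_dataset

def to_str_paths(paths):
    # os.fspath turns any PathLike (e.g. WindowsPath) into a plain string.
    return [os.fspath(p) for p in paths]

files = to_str_paths(Path(r"D:\corpora\yourcorpus").glob("*.txt"))
dataset = load_dataset("text", data_files=files)
```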
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/582/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/582/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17:50:26
|
https://api.github.com/repos/huggingface/datasets/issues/581
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/581/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/581/comments
|
https://api.github.com/repos/huggingface/datasets/issues/581/events
|
https://github.com/huggingface/datasets/issues/581
| 695,120,517
|
MDU6SXNzdWU2OTUxMjA1MTc=
| 581
|
Better error message when input file does not exist
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-09-07T13:47:59
| 2020-09-09T09:00:07
| 2020-09-09T09:00:07
|
CONTRIBUTOR
| null | null | null | null |
In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking that each file actually exists and/or that the argument is not falsy.
```python
dataset = load_dataset("text", data_files=[])
```
Example error trace.
```
Using custom data configuration default
Downloading and preparing dataset text/default-d18f9b6611eb8e16 (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to C:\Users\bramv\.cache\huggingface\datasets\text\default-d18f9b6611eb8e16\0.0.0\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b...
Traceback (most recent call last):
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 424, in incomplete_dir
yield tmp_dir
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 537, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 813, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\arrow_writer.py", line 217, in finalize
self.pa_writer.close()
AttributeError: 'NoneType' object has no attribute 'close'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/dev/python/dutch-simplification/main.py", line 7, in <module>
dataset = load_dataset("text", data_files=files)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare
self._save_info()
File "c:\users\bramv\appdata\local\programs\python\python38\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 430, in incomplete_dir
shutil.rmtree(tmp_dir)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 737, in rmtree
return _rmtree_unsafe(path, onerror)
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 615, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 613, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\bramv\\.cache\\huggingface\\datasets\\text\\default-d18f9b6611eb8e16\\0.0.0\\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b.incomplete\\text-train.arrow'
```
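Until such a check lands in the library, a user-side guard along these lines would fail early with a readable message instead of the traceback above (this validation is a sketch, not something `nlp` does today):
```python
from pathlib import Path
from nlp import load_dataset

data_files = []  # whatever the caller would normally pass in

if not data_files:
    raise ValueError("data_files is empty; pass at least one input file")
missing = [f for f in data_files if not Path(f).is_file()]
if missing:
    raise FileNotFoundError(f"These data files do not exist: {missing}")

dataset = load_dataset("text", data_files=data_files)
```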
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/581/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/581/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 19:12:08
|
https://api.github.com/repos/huggingface/datasets/issues/580
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/580/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/580/comments
|
https://api.github.com/repos/huggingface/datasets/issues/580/events
|
https://github.com/huggingface/datasets/issues/580
| 694,954,551
|
MDU6SXNzdWU2OTQ5NTQ1NTE=
| 580
|
nlp re-creates already-there caches when using a script, but not within a shell
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)",
"Fixed with a clean re-install!"
] | 2020-09-07T10:23:50
| 2020-09-07T15:19:09
| 2020-09-07T14:26:41
|
CONTRIBUTOR
| null | null | null | null |
`nlp` keeps creating new caches for the same file when `filter` is launched from a script, but behaves correctly from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 1)
```
twice. If launched from a `file.py` script, the cache will be re-created the second time. If launched as 3 shell/`ipython` commands, `nlp` will correctly re-use the cache.
As observed with @lhoestq.
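While debugging, one way to sidestep the re-creation is to pin the cache files explicitly, assuming the installed `nlp` version's `filter` accepts a `cache_file_name` argument (the file names below are arbitrary):
```python
import nlp

hans = nlp.load_dataset("hans", split="validation")
# Explicit cache files let repeated script runs reuse the same arrow files.
hans_easy_data = hans.filter(lambda x: x["label"] == 0, cache_file_name="hans_easy.arrow")
hans_hard_data = hans.filter(lambda x: x["label"] == 1, cache_file_name="hans_hard.arrow")
```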
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/580/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/580/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4:02:51
|
https://api.github.com/repos/huggingface/datasets/issues/577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/577/events
|
https://github.com/huggingface/datasets/issues/577
| 694,607,148
|
MDU6SXNzdWU2OTQ2MDcxNDg=
| 577
|
Some languages in wikipedia dataset are not loading
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gaguilar",
"id": 5833357,
"login": "gaguilar",
"node_id": "MDQ6VXNlcjU4MzMzNTc=",
"organizations_url": "https://api.github.com/users/gaguilar/orgs",
"received_events_url": "https://api.github.com/users/gaguilar/received_events",
"repos_url": "https://api.github.com/users/gaguilar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gaguilar",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Some wikipedia languages have already been processed by us and are hosted on our google storage. This is the case for \"fr\" and \"en\" for example.\r\n\r\nFor other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site.\r\nParsing can take some time for languages with hundreds of MB of xml.\r\n\r\nLet me know if you encounter an error or if you feel that is is taking too long for you.\r\nWe could process those that really take too much time",
"Ok, thanks for clarifying, that makes sense. I will time those examples later today and post back here.\r\n\r\nAlso, it seems that not all dumps should use the same date. For instance, I was checking the Spanish dump doing the following:\r\n```\r\ndata = nlp.load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner', split='train')\r\n```\r\n\r\nI got the error below because this URL does not exist: https://dumps.wikimedia.org/eswiki/20200501/dumpstatus.json. So I checked the actual available dates here https://dumps.wikimedia.org/eswiki/ and there is no 20200501. If one tries for a date available in the link, then the nlp library does not allow such a request because is not in the list of expected datasets.\r\n\r\n```\r\nDownloading and preparing dataset wikipedia/20200501.es (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguilar/.cache/huggingface/datasets/wikipedia/20200501.es/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/load.py\", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py\", line 462, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py\", line 965, in _download_and_prepare\r\n super(BeamBasedBuilder, self)._download_and_prepare(\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py\", line 518, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py\", line 422, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract({\"info\": info_url})\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/download_manager.py\", line 220, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/download_manager.py\", line 155, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/py_utils.py\", line 163, in map_nested\r\n return {\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/py_utils.py\", line 164, in <dictcomp>\r\n k: map_nested(\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/py_utils.py\", line 191, in map_nested\r\n return function(data_struct)\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/download_manager.py\", line 156, in <lambda>\r\n lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/file_utils.py\", line 191, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/gaguilar/.conda/envs/pytorch/lib/python3.8/site-packages/nlp/utils/file_utils.py\", line 356, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach 
https://dumps.wikimedia.org/eswiki/20200501/dumpstatus.json\r\n```",
"Thanks ! This will be very helpful.\r\n\r\nAbout the date issue, I think it's possible to use another date with\r\n\r\n```python\r\nload_dataset(\"wikipedia\", language=\"es\", date=\"...\", beam_runner=\"...\")\r\n```\r\n\r\nHowever we've not processed wikipedia dumps for other dates than 20200501 (yet ?)\r\n\r\nOne more thing that is specific to 20200501.es: it was available once but the `mwparserfromhell` was not able to parse it for some reason, so we didn't manage to get a processed version of 20200501.es (see #321 )",
"Cool! Thanks for the trick regarding different dates!\r\n\r\nI checked the download/processing time for retrieving the Arabic Wikipedia dump, and it took about 3.2 hours. I think that this may be a bit impractical when it comes to working with multiple languages (although I understand that storing those datasets in your Google storage may not be very appealing either). \r\n\r\nFor the record, here's what I did:\r\n```python\r\nimport nlp\r\nimport time\r\n\r\ndef timeit(filename):\r\n elapsed = time.time()\r\n data = nlp.load_dataset('wikipedia', filename, beam_runner='DirectRunner', split='train')\r\n elapsed = time.time() - elapsed\r\n print(f\"Loading the '{filename}' data took {elapsed:,.1f} seconds...\")\r\n return data\r\n\r\ndata = timeit('20200501.ar')\r\n```\r\n\r\nHere's the output:\r\n```\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13.0k/13.0k [00:00<00:00, 8.34MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28.7k/28.7k [00:00<00:00, 954kB/s]\r\nDownloading and preparing dataset wikipedia/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguil20/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47.4k/47.4k [00:00<00:00, 1.40MB/s]\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 79.8M/79.8M [00:15<00:00, 5.13MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 171M/171M [00:33<00:00, 5.13MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 103M/103M [00:20<00:00, 5.14MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 227M/227M [00:44<00:00, 5.06MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 140M/140M [00:28<00:00, 4.96MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 160M/160M [00:30<00:00, 5.20MB/s]\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 97.5M/97.5M [00:19<00:00, 5.06MB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222M/222M [00:42<00:00, 
5.21MB/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [03:16<00:00, 196.39s/sources]\r\nDataset wikipedia downloaded and prepared to /home/gaguil20/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50. Subsequent calls will reuse this data.\r\nLoading the '20200501.ar' data took 11,582.7 seconds...\r\n````",
"> About the date issue, I think it's possible to use another date with\r\n> ```python\r\n> load_dataset(\"wikipedia\", language=\"es\", date=\"...\", beam_runner=\"...\")\r\n> ```\r\n\r\nI tried your suggestion about the date and the function does not accept the language and date keywords. I tried both on `nlp` v0.4 and the new `datasets` library (v1.0.2):\r\n```\r\nload_dataset(\"wikipedia\", language=\"es\", date=\"20200601\", beam_runner='DirectRunner', split='train')\r\n```\r\nFor now, my quick workaround to keep things moving was to simply change the date inside the library at this line: [https://github.com/huggingface/datasets/blob/master/datasets/wikipedia/wikipedia.py#L403](https://github.com/huggingface/datasets/blob/master/datasets/wikipedia/wikipedia.py#L403)\r\n\r\nNote that the date and languages are valid: [https://dumps.wikimedia.org/eswiki/20200601/dumpstatus.json](https://dumps.wikimedia.org/eswiki/20200601/dumpstatus.json)\r\n\r\nAny suggestion is welcome :) @lhoestq \r\n\r\n\r\n## **[UPDATE]**\r\n\r\nThe workaround I mentioned fetched the data, but then I faced another issue (even the log says to report this as bug):\r\n```\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\n```\r\n\r\nHere's the full stack (which says that there is a key error caused by this key: `KeyError: '000nbsp'`):\r\n\r\n```Downloading and preparing dataset wikipedia/20200601.es (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gustavoag/.cache/huggingface/datasets/wikipedia/20200601.es/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 74.7k/74.7k [00:00<00:00, 1.53MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 232M/232M [00:48<00:00, 4.75MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 442M/442M [01:39<00:00, 4.44MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 173M/173M [00:33<00:00, 5.12MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 344M/344M [01:14<00:00, 4.59MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 541M/541M [01:59<00:00, 4.52MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 476M/476M [01:31<00:00, 5.18MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 545M/545M [02:02<00:00, 4.46MB/s]\r\nDownloading: 
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 299M/299M [01:01<00:00, 4.89MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.60M/9.60M [00:01<00:00, 4.84MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 423M/423M [01:36<00:00, 4.38MB/s]\r\nWARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['--lang', 'es', '--date', '20200601', '--tokenizer', 'bert-base-multilingual-cased', '--cache', 'train', 'valid', '--max_dataset_length', '200000', '10000']\r\n\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nTraceback (most recent call last):\r\n File \"apache_beam/runners/common.py\", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam/runners/common.py\", line 1095, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py\", line 500, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py\", line 556, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/wikicode.py\", line 643, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py\", line 63, in __strip__\r\n return self.normalize()\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py\", line 178, in normalize\r\n return chrfunc(htmlentities.name2codepoint[self.value])\r\nKeyError: '000nbsp'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/raid/data/gustavoag/projects/char2subword/research/preprocessing/split_wiki.py\", line 96, in <module>\r\n main()\r\n File \"/raid/data/gustavoag/projects/char2subword/research/preprocessing/split_wiki.py\", 
line 65, in main\r\n data = nlp.load_dataset('wikipedia', f'{args.date}.{args.lang}', beam_runner='DirectRunner', split='train')\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/load.py\", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py\", line 462, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/builder.py\", line 969, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/pipeline.py\", line 534, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/direct/direct_runner.py\", line 119, in run_pipeline\r\n return runner.run_pipeline(pipeline, options)\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 172, in run_pipeline\r\n self._latest_run_result = self.run_via_runner_api(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 183, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 338, in run_stages\r\n stage_results = self._run_stage(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 512, in _run_stage\r\n last_result, deferred_inputs, fired_timers = self._run_bundle(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 556, in _run_bundle\r\n result, splits = bundle_manager.process_bundle(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 940, in process_bundle\r\n for result, split_result in executor.map(execute, zip(part_inputs, # pylint: disable=zip-builtin-not-iterating\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/concurrent/futures/_base.py\", line 611, in result_iterator\r\n yield fs.pop().result()\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/concurrent/futures/_base.py\", line 439, in result\r\n return self.__get_result()\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/concurrent/futures/_base.py\", line 388, in __get_result\r\n raise self._exception\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/utils/thread_pool_executor.py\", line 44, in run\r\n self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 932, in execute\r\n return bundle_manager.process_bundle(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 837, in process_bundle\r\n result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n File 
\"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/worker_handlers.py\", line 352, in push\r\n response = self.worker.do_instruction(request)\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker.py\", line 479, in do_instruction\r\n return getattr(self, request_type)(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker.py\", line 515, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/worker/bundle_processor.py\", line 977, in process_bundle\r\n input_op_by_transform_id[element.transform_id].process_encoded(\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/apache_beam/runners/worker/bundle_processor.py\", line 218, in process_encoded\r\n self.output(decoded_value)\r\n File \"apache_beam/runners/worker/operations.py\", line 330, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam/runners/worker/operations.py\", line 332, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam/runners/worker/operations.py\", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam/runners/worker/operations.py\", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/worker/operations.py\", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/common.py\", line 963, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 1030, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam/runners/common.py\", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam/runners/common.py\", line 1122, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"apache_beam/runners/worker/operations.py\", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam/runners/worker/operations.py\", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/worker/operations.py\", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/common.py\", line 963, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 1030, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam/runners/common.py\", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam/runners/common.py\", line 1122, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"apache_beam/runners/worker/operations.py\", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam/runners/worker/operations.py\", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/worker/operations.py\", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File 
\"apache_beam/runners/common.py\", line 963, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 1045, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/future/utils/__init__.py\", line 446, in raise_with_traceback\r\n raise exc.with_traceback(traceback)\r\n File \"apache_beam/runners/common.py\", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam/runners/common.py\", line 1095, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py\", line 500, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/nlp/datasets/wikipedia/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50/wikipedia.py\", line 556, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/wikicode.py\", line 643, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py\", line 63, in __strip__\r\n return self.normalize()\r\n File \"/home/gustavoag/anaconda3/envs/pytorch/lib/python3.8/site-packages/mwparserfromhell/nodes/html_entity.py\", line 178, in normalize\r\n return chrfunc(htmlentities.name2codepoint[self.value])\r\nKeyError: \"000nbsp [while running 'train/Clean content']\"```",
"@lhoestq Any updates on this? I have similar issues with the Romanian dump, tnx.",
"Hey @gaguilar ,\r\n\r\nI just found the [\"char2subword\" paper](https://arxiv.org/pdf/2010.12730.pdf) and I'm really interested in trying it out on own vocabs/datasets like for historical texts (I've already [trained some lms](https://github.com/stefan-it/europeana-bert) on newspaper articles with OCR errors).\r\n\r\nDo you plan to release the code for your paper or is it possible to get the implementation 🤔 Many thanks :hugs: ",
"Hi @stefan-it! Thanks for your interest in our work! We do plan to release the code, but we will make it available once the paper has been published at a conference. Sorry for the inconvenience!\r\n\r\nHi @lhoestq, do you have any insights for this issue by any chance? Thanks!",
"This is an issue on the `mwparserfromhell` side. You could try to update `mwparserfromhell` and see if it fixes the issue. If it doesn't we'll have to create an issue on their repo for them to fix it.\r\nBut first let's see if the latest version of `mwparserfromhell` does the job.",
"I think the work around as suggested in the issue [#886] is not working for several languages, such as `id`. For example, I tried all the dates to download dataset for `id` langauge from the following link: (https://github.com/huggingface/datasets/pull/886) [https://dumps.wikimedia.org/idwiki/](https://dumps.wikimedia.org/idwiki/ )\r\n\r\n> >>> dataset = load_dataset('wikipedia', language='id', date=\"20210501\", beam_runner='DirectRunner')\r\nWARNING:datasets.builder:Using custom data configuration 20210501.id-date=20210501,language=id\r\nDownloading and preparing dataset wikipedia/20210501.id (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/.cache/huggingface/datasets/wikipedia/20210501.id-date=20210501,language=id/0.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/load.py\", line 745, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/builder.py\", line 1139, in _download_and_prepare\r\n super(BeamBasedBuilder, self)._download_and_prepare(\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/builder.py\", line 630, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/Users/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py\", line 420, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract({\"info\": info_url})\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 287, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 203, in map_nested\r\n mapped = [\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 204, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 218, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 281, in cached_path\r\n output_path = get_from_cache(\r\n File \"/Users/opt/anaconda3/envs/proj/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 623, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://dumps.wikimedia.org/idwiki/20210501/dumpstatus.json\r\n\r\nMoreover the downloading speed for `non-en` language is very 
very slow. And interestingly the download stopped after approx a couple minutes due to the read time-out. I tried numerous times and the results is same. Is there any feasible way to download non-en language using huggingface?\r\n\r\n> File \"/Users/miislamg/opt/anaconda3/envs/proj-semlm/lib/python3.9/site-packages/requests/models.py\", line 760, in generate\r\n raise ConnectionError(e)\r\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='dumps.wikimedia.org', port=443): Read timed out.\r\nDownloading: 7%|████████▎ | 10.2M/153M [03:35<50:07, 47.4kB/s]",
"Hi ! The link https://dumps.wikimedia.org/idwiki/20210501/dumpstatus.json seems to be working fine for me.\r\n\r\nRegarding the time outs, it must come either from an issue on the wikimedia host side, or from your internet connection.\r\nFeel free to try again several times.",
"I was trying to download dataset for `es` language, however I am getting the following error:\r\n```\r\ndataset = load_dataset('wikipedia', language='es', date=\"20210320\", beam_runner='DirectRunner') \r\n```\r\n\r\n```\r\nDownloading and preparing dataset wikipedia/20210320.es (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /scratch/user_name/datasets/wikipedia/20210320.es-date=20210320,language=es/0.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1...\r\nTraceback (most recent call last):\r\n File \"apache_beam/runners/common.py\", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam/runners/common.py\", line 1368, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"/scratch/user_name/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py\", line 492, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File \"/scratch/user_name/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py\", line 548, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File \"/opt/conda/lib/python3.7/site-packages/mwparserfromhell/wikicode.py\", line 639, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/mwparserfromhell/nodes/html_entity.py\", line 60, in __strip__\r\n return self.normalize()\r\n File \"/opt/conda/lib/python3.7/site-packages/mwparserfromhell/nodes/html_entity.py\", line 150, in normalize\r\n return chr(htmlentities.name2codepoint[self.value])\r\nKeyError: '000nbsp'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"download_dataset_all.py\", line 8, in <module>\r\n dataset = load_dataset('wikipedia', language=language, date=\"20210320\", beam_runner='DirectRunner') \r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/builder.py\", line 575, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/builder.py\", line 1152, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/pipeline.py\", line 564, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/direct/direct_runner.py\", line 131, in run_pipeline\r\n return runner.run_pipeline(pipeline, options)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 190, in run_pipeline\r\n pipeline.to_runner_api(default_environment=self._default_environment))\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 200, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 366, in run_stages\r\n bundle_context_manager,\r\n File 
\"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 562, in _run_stage\r\n bundle_manager)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 602, in _run_bundle\r\n data_input, data_output, input_timers, expected_timer_output)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 903, in process_bundle\r\n result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/portability/fn_api_runner/worker_handlers.py\", line 378, in push\r\n response = self.worker.do_instruction(request)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py\", line 610, in do_instruction\r\n getattr(request, request_type), request.instruction_id)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py\", line 647, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py\", line 1001, in process_bundle\r\n element.data)\r\n File \"/opt/conda/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py\", line 229, in process_encoded\r\n self.output(decoded_value)\r\n File \"apache_beam/runners/worker/operations.py\", line 356, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam/runners/worker/operations.py\", line 358, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam/runners/worker/operations.py\", line 220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam/runners/worker/operations.py\", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/worker/operations.py\", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/common.py\", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 1300, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam/runners/common.py\", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam/runners/common.py\", line 1395, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"apache_beam/runners/worker/operations.py\", line 220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam/runners/worker/operations.py\", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/worker/operations.py\", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/common.py\", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 1300, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam/runners/common.py\", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam/runners/common.py\", line 1395, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n 
File \"apache_beam/runners/worker/operations.py\", line 220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam/runners/worker/operations.py\", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/worker/operations.py\", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam/runners/common.py\", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 1315, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"/opt/conda/lib/python3.7/site-packages/future/utils/__init__.py\", line 446, in raise_with_traceback\r\n raise exc.with_traceback(traceback)\r\n File \"apache_beam/runners/common.py\", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam/runners/common.py\", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam/runners/common.py\", line 1368, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"/scratch/user_name/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py\", line 492, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File \"/scratch/user_name/modules/datasets_modules/datasets/wikipedia/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1/wikipedia.py\", line 548, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File \"/opt/conda/lib/python3.7/site-packages/mwparserfromhell/wikicode.py\", line 639, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/mwparserfromhell/nodes/html_entity.py\", line 60, in __strip__\r\n return self.normalize()\r\n File \"/opt/conda/lib/python3.7/site-packages/mwparserfromhell/nodes/html_entity.py\", line 150, in normalize\r\n return chr(htmlentities.name2codepoint[self.value])\r\nKeyError: \"000nbsp [while running 'train/Clean content']\"\r\n```",
"Hi ! This looks related to this issue: https://github.com/huggingface/datasets/issues/1994\r\nBasically the parser that is used (mwparserfromhell) has some issues for some pages in `es`.\r\nWe already reported some issues for `es` on their repo at https://github.com/earwig/mwparserfromhell/issues/247 but it looks like there are still a few issues. Might be a good idea to open a new issue on the mwparserfromhell repo",
"Any updates on this so far?",
"The issue:\r\n```\r\nKeyError: \"000nbsp [while running 'train/Clean content']\"\r\n```\r\nreported in comments:\r\n- https://github.com/huggingface/datasets/issues/577#issuecomment-701890059 (by @gaguilar)\r\n- https://github.com/huggingface/datasets/issues/577#issuecomment-879513227 (by @mmiakashs)\r\n\r\nwas normally fixed in the `mwparserfromhell` library and will be accessible in their next release version `0.7`:\r\n- https://github.com/earwig/mwparserfromhell/issues/288",
"mwparserfromhell 0.7 has still not been released, but you might have luck with the dev version:\r\n`pip install git+https://github.com/earwig/mwparserfromhell.git@0f89f44`"
] | 2020-09-07T01:16:29
| 2023-04-11T22:50:48
| 2022-10-11T11:16:04
|
CONTRIBUTOR
| null | null | null | null |
Hi,
I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:
```
import nlp
langs = ['ar', 'af', 'an']
for lang in langs:
    data = nlp.load_dataset('wikipedia', f'20200501.{lang}', beam_runner='DirectRunner', split='train')
    print(lang, len(data))
```
Here's what I see for 'ar' (it gets stuck there):
```
Downloading and preparing dataset wikipedia/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguilar/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...
```
Note that those languages are indeed in the list of expected languages. Any suggestions on how to work around this? Thanks!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/577/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/577/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 764 days, 9:59:35
|
https://api.github.com/repos/huggingface/datasets/issues/575
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/575/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/575/comments
|
https://api.github.com/repos/huggingface/datasets/issues/575/events
|
https://github.com/huggingface/datasets/issues/575
| 693,691,611
|
MDU6SXNzdWU2OTM2OTE2MTE=
| 575
|
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/488428?v=4",
"events_url": "https://api.github.com/users/sudarshan85/events{/privacy}",
"followers_url": "https://api.github.com/users/sudarshan85/followers",
"following_url": "https://api.github.com/users/sudarshan85/following{/other_user}",
"gists_url": "https://api.github.com/users/sudarshan85/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sudarshan85",
"id": 488428,
"login": "sudarshan85",
"node_id": "MDQ6VXNlcjQ4ODQyOA==",
"organizations_url": "https://api.github.com/users/sudarshan85/orgs",
"received_events_url": "https://api.github.com/users/sudarshan85/received_events",
"repos_url": "https://api.github.com/users/sudarshan85/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sudarshan85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sudarshan85/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sudarshan85",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though.",
"Thanks for the report, I'll give a look!",
"I am also seeing a similar error when running the following:\r\n\r\n```\r\nimport nlp\r\ndataset = load_dataset('cola')\r\n```\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/load.py\", line 509, in load_dataset\r\n module_path = prepare_module(path, download_config=download_config, dataset=True)\r\n File \"/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/load.py\", line 248, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/utils/file_utils.py\", line 191, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/js11133/.conda/envs/jiant/lib/python3.8/site-packages/nlp/utils/file_utils.py\", line 356, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cola/cola.py\r\n```",
"@jeswan `\"cola\"` is not a valid dataset identifier (you can check the up-to-date list on https://huggingface.co/datasets) but you can find cola inside glue.",
"Ah right. Thanks!",
"Hi. Closing this one since #626 updated the glue urls.\r\n\r\n> 1. Why is it still blocking? Is it still downloading?\r\n\r\nAfter downloading it generates the arrow file by iterating through the examples.\r\nThe number of examples processed by second is shown during the processing (not sure why it was not the case for you)\r\n\r\n> 2. I specified split as train, so why is the test folder being populated?\r\n\r\nIt downloads every split\r\n\r\n\r\n\r\n"
] | 2020-09-04T21:46:25
| 2020-09-22T10:41:36
| 2020-09-22T10:41:36
|
NONE
| null | null | null | null |
Hi,
I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset:
```
>>> from nlp import load_dataset
>>> dataset = load_dataset('glue', 'mrpc', split='train')
```
However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the last few lines):
```
/net/vaosl01/opt/NFS/su0/miniconda3/envs/hf/lib/python3.7/site-packages/nlp/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)
354 " to False."
355 )
--> 356 raise ConnectionError("Couldn't reach {}".format(url))
357
358 # From now on, connected is True.
ConnectionError: Couldn't reach https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc
```
I tried glue with cola and sst2 and got the same error; the only difference was that mrpc in the URL was replaced with cola and sst2, respectively.
Since this was not working, I thought I'd try another dataset. So I tried downloading the imdb dataset:
```
ds = load_dataset('imdb', split='train')
```
This downloads the data, but it just blocks after that:
```
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.56k/4.56k [00:00<00:00, 1.38MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.07k/2.07k [00:00<00:00, 1.15MB/s]
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown sizetotal: 207.28 MiB) to /net/vaosl01/opt/NFS/su0/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 84.1M/84.1M [00:07<00:00, 11.1MB/s]
```
I checked the folder `$HF_HOME/datasets/downloads/extracted/<id>/aclImdb`. This folder is constantly growing in size. When I navigated to the train folder within, there were no files. However, the test folder seemed to be populating; the last time I checked it was 327M. I thought the IMDB dataset was smaller than that. My questions are:
1. Why is it still blocking? Is it still downloading?
2. I specified split as train, so why is the test folder being populated?
3. I read somewhere that after downloading, `nlp` converts the text files into some sort of `arrow` files, which will also take a while. Is this also happening here?
Thanks.
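
For reference, a minimal sketch of the caching behaviour described in the comments above — the slow part is the one-time download plus Arrow conversion, and later calls reuse the cache (same API as the snippets above; timings are indicative only):
```python
from nlp import load_dataset

# first call: downloads the archive and converts the text files to Arrow (slow, one-time cost)
ds = load_dataset('imdb', split='train')

# second call: reuses the cached Arrow files, so it returns almost immediately
ds = load_dataset('imdb', split='train')
print(len(ds))
```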
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/575/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/575/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17 days, 12:55:11
|
https://api.github.com/repos/huggingface/datasets/issues/568
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/568/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/568/comments
|
https://api.github.com/repos/huggingface/datasets/issues/568/events
|
https://github.com/huggingface/datasets/issues/568
| 691,638,656
|
MDU6SXNzdWU2OTE2Mzg2NTY=
| 568
|
`metric.compute` throws `ArrowInvalid` error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ibeltagy",
"id": 2287797,
"login": "ibeltagy",
"node_id": "MDQ6VXNlcjIyODc3OTc=",
"organizations_url": "https://api.github.com/users/ibeltagy/orgs",
"received_events_url": "https://api.github.com/users/ibeltagy/received_events",
"repos_url": "https://api.github.com/users/ibeltagy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ibeltagy",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hmm might be related to what we are solving in #564",
"Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 ",
"Closing this one as it was fixed in #654 \r\nFeel free to re-open if you have other questions"
] | 2020-09-03T04:56:57
| 2020-10-05T16:33:53
| 2020-10-05T16:33:53
|
NONE
| null | null | null | null |
I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly, so I can't easily reproduce it. This is using `nlp==0.4.0`.
```
File "/home/beltagy/trainer.py", line 92, in validation_step
rouge_scores = rouge.compute(predictions=generated_str, references=gold_str, rouge_types=['rouge2', 'rouge1', 'rougeL'])
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 224, in compute
self.finalize(timeout=timeout)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 213, in finalize
self.data = Dataset(**reader.read_files(node_files))
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 217, in read_files
dataset_kwargs = self._read_files(files=files, info=self._info, original_instructions=original_instructions)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 162, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 276, in _get_dataset_from_filename
f = pa.ipc.open_stream(mmap)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 173, in open_stream
return RecordBatchStreamReader(source)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 64, in __init__
self._open(source)
File "pyarrow/ipc.pxi", line 469, in pyarrow.lib._RecordBatchStreamReader._open
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Tried reading schema message, was null or length 0
```
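
For context, a minimal sketch of the distributed metric pattern this traceback comes from; the `process_id`/`num_process` arguments are assumed from the `nlp` metric API of that era, and `rank`/`world_size` are placeholders for values normally provided by the launcher:
```python
import nlp

rank, world_size = 0, 1  # placeholders; in practice these come from the distributed launcher

rouge = nlp.load_metric('rouge', process_id=rank, num_process=world_size)
rouge.add_batch(predictions=["a generated summary"], references=["a gold summary"])
# each process writes its inputs to an Arrow cache file; process 0 reads them all back in
# compute(), which is where "Tried reading schema message, was null or length 0" can surface
scores = rouge.compute(rouge_types=['rouge2', 'rouge1', 'rougeL'])
print(scores)
```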
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/568/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/568/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 32 days, 11:36:56
|
https://api.github.com/repos/huggingface/datasets/issues/565
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/565/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/565/comments
|
https://api.github.com/repos/huggingface/datasets/issues/565/events
|
https://github.com/huggingface/datasets/issues/565
| 691,039,121
|
MDU6SXNzdWU2OTEwMzkxMjE=
| 565
|
No module named 'nlp.logging'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/66633754?v=4",
"events_url": "https://api.github.com/users/melody-ju/events{/privacy}",
"followers_url": "https://api.github.com/users/melody-ju/followers",
"following_url": "https://api.github.com/users/melody-ju/following{/other_user}",
"gists_url": "https://api.github.com/users/melody-ju/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/melody-ju",
"id": 66633754,
"login": "melody-ju",
"node_id": "MDQ6VXNlcjY2NjMzNzU0",
"organizations_url": "https://api.github.com/users/melody-ju/orgs",
"received_events_url": "https://api.github.com/users/melody-ju/received_events",
"repos_url": "https://api.github.com/users/melody-ju/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/melody-ju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/melody-ju/subscriptions",
"type": "User",
"url": "https://api.github.com/users/melody-ju",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I'd suggest you to download the right bleurt folder from github ([this one](https://github.com/huggingface/nlp/tree/0.4.0/metrics/bleurt)) and do\r\n\r\n```python\r\nfrom nlp import load_metric\r\n\r\nbleurt = load_metric(\"path/to/bleurt/folder\")\r\n```\r\n\r\nTo download it you can either clone the repo or download the `bleurt.py` file and place it in a folder named `bleurt` ",
"Actually we can fix this on our side, this script didn't had to be updated. I'll do it in a few minutes"
] | 2020-09-02T13:49:50
| 2020-09-03T07:29:50
| 2020-09-03T07:29:50
|
NONE
| null | null | null | null |
Hi, I am using nlp version 0.4.0. I am trying to use bleurt as an eval metric; however, the bleurt script imports nlp.logging, which raises the following error. What am I missing?
```
>>> import nlp
2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
>>> bleurt = nlp.load_metric("bleurt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 443, in load_metric
metric_cls = import_main_class(module_path, dataset=False)
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 61, in import_main_class
module = importlib.import_module(module_path)
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/metrics/bleurt/43448cf2959ea81d3ae0e71c5c8ee31dc15eed9932f197f5f50673cbcecff2b5/bleurt.py", line 20, in <module>
from nlp.logging import get_logger
ModuleNotFoundError: No module named 'nlp.logging'
```
Just to show once again that I can't import the logging module:
```
>>> import nlp
2020-09-02 13:48:38.190621: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
>>> nlp.__version__
'0.4.0'
>>> from nlp.logging import get_logger
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'nlp.logging'
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/565/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/565/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 17:40:00
|
https://api.github.com/repos/huggingface/datasets/issues/560
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/560/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/560/comments
|
https://api.github.com/repos/huggingface/datasets/issues/560/events
|
https://github.com/huggingface/datasets/issues/560
| 690,488,764
|
MDU6SXNzdWU2OTA0ODg3NjQ=
| 560
|
Using custom DownloadConfig results in an error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1789921?v=4",
"events_url": "https://api.github.com/users/ynouri/events{/privacy}",
"followers_url": "https://api.github.com/users/ynouri/followers",
"following_url": "https://api.github.com/users/ynouri/following{/other_user}",
"gists_url": "https://api.github.com/users/ynouri/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ynouri",
"id": 1789921,
"login": "ynouri",
"node_id": "MDQ6VXNlcjE3ODk5MjE=",
"organizations_url": "https://api.github.com/users/ynouri/orgs",
"received_events_url": "https://api.github.com/users/ynouri/received_events",
"repos_url": "https://api.github.com/users/ynouri/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ynouri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ynouri/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ynouri",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`.\r\n\r\nSee:\r\n* https://github.com/huggingface/nlp/blob/5fb61e1012bda724a9b6b847307d90a1380abfa5/src/nlp/load.py#L227\r\n* https://github.com/huggingface/nlp/blob/5fb61e1012bda724a9b6b847307d90a1380abfa5/src/nlp/builder.py#L388\r\n\r\nMaybe a cleaner solution would be to always instantiate a default `DownloadConfig` object at the top-level, have it as non-optional for the lower-level functions and treat it as immutable. ",
"Thanks for the report, I'll take a look.\r\n\r\nWhat is your specific use-case for providing a DownloadConfig object?\r\n",
"Thanks. Our use case involves running a training job behind a corporate firewall with no access to any external resources (S3, GCP or other web resources).\r\n\r\nI was thinking about a 2-steps process:\r\n1) Download the resources / artifacts using some secure corporate channel, ie run `nlp.load_dataset()` without a specific `DownloadConfig`. After that, collect the files from the `$HF_HOME` folder\r\n2) Copy the `$HF_HOME` folder in the firewalled environment. Run `nlp.load_dataset()` with a custom config `DownloadConfig(local_files_only=True)`\r\n\r\nHowever this ends up a bit clunky in practice, even when solving the `DownloadConfig` issue above. For example, the `filename` hash computed in `get_from_cache()` differs in the `local_files_only=False` vs `local_files_only=True` case (local case defaults `etag` to `None`, which results in a different hash). So effectively step 2) above doesn't work because the hash computed differs from the hash in the cache folder. Some hacks / workaround are possible but this solution becomes very convoluted.\r\nhttps://github.com/huggingface/nlp/blob/c214aa5a4430c1df1bcd0619fd94d6abdf9d2da7/src/nlp/utils/file_utils.py#L417\r\n\r\nWould you recommend a different path?\r\n",
"I see.\r\n\r\nProbably the easiest way for you would be that we add simple serialization/deserialization methods to the Dataset and DatasetDict objects once the data files have been downloaded and all the dataset is processed.\r\n\r\nWhat do you think @lhoestq ?",
"This use-case will be solved with #571 ",
"Thank you very much @thomwolf and @lhoestq we will give it a try"
] | 2020-09-01T22:23:02
| 2022-10-04T17:23:45
| 2022-10-04T17:23:45
|
NONE
| null | null | null | null |
## Version / Environment
Ubuntu 18.04
Python 3.6.8
nlp 0.4.0
## Description
Loading the `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error.
## How to reproduce
### Example without DownloadConfig --> works
```python
import os
os.environ["HF_HOME"] = "/data/hf-test-without-dl-config-01/"
import logging
import nlp
logging.basicConfig(level=logging.INFO)
if __name__ == "__main__":
imdb = nlp.load_dataset(path="imdb")
```
### Example with DownloadConfig --> doesn't work
```python
import os
os.environ["HF_HOME"] = "/data/hf-test-with-dl-config-01/"
import logging
import nlp
from nlp.utils import DownloadConfig
logging.basicConfig(level=logging.INFO)
if __name__ == "__main__":
download_config = DownloadConfig()
imdb = nlp.load_dataset(path="imdb", download_config=download_config)
```
Error traceback:
```
Traceback (most recent call last):
File "/.../example_with_dl_config.py", line 13, in <module>
imdb = nlp.load_dataset(path="imdb", download_config=download_config)
File "/.../python3.6/python3.6/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 518, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/.../python3.6/python3.6/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743/imdb.py", line 86, in _split_generators
arch_path = dl_manager.download_and_extract(_DOWNLOAD_URL)
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 220, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 158, in download
self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 108, in _record_sizes_checksums
self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)
File "/.../python3.6/python3.6/site-packages/nlp/utils/info_utils.py", line 79, in get_size_checksum_dict
with open(path, "rb") as f:
IsADirectoryError: [Errno 21] Is a directory: '/data/hf-test-with-dl-config-01/datasets/extracted/b6802c5b61824b2c1f7dbf7cda6696b5f2e22214e18d171ce1ed3be90c931ce5'
```
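
For context, a minimal sketch of the two-step offline flow discussed in the comments above (the thread notes it is still clunky because the cache-key hashing differs between the online and `local_files_only` code paths); paths are illustrative:
```python
import os
os.environ["HF_HOME"] = "/data/hf-offline/"

import nlp
from nlp.utils import DownloadConfig

# step 1 (machine with internet access): populate the cache normally
nlp.load_dataset(path="imdb")

# step 2 (firewalled machine, after copying the HF_HOME folder over): forbid network access
imdb = nlp.load_dataset(path="imdb", download_config=DownloadConfig(local_files_only=True))
```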
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/560/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/560/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 762 days, 19:00:43
|
https://api.github.com/repos/huggingface/datasets/issues/554
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/554/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/554/comments
|
https://api.github.com/repos/huggingface/datasets/issues/554/events
|
https://github.com/huggingface/datasets/issues/554
| 690,173,214
|
MDU6SXNzdWU2OTAxNzMyMTQ=
| 554
|
nlp downloads to its module path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49398?v=4",
"events_url": "https://api.github.com/users/danieldk/events{/privacy}",
"followers_url": "https://api.github.com/users/danieldk/followers",
"following_url": "https://api.github.com/users/danieldk/following{/other_user}",
"gists_url": "https://api.github.com/users/danieldk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/danieldk",
"id": 49398,
"login": "danieldk",
"node_id": "MDQ6VXNlcjQ5Mzk4",
"organizations_url": "https://api.github.com/users/danieldk/orgs",
"received_events_url": "https://api.github.com/users/danieldk/received_events",
"repos_url": "https://api.github.com/users/danieldk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/danieldk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danieldk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/danieldk",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?",
"> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are installing in a virtual environment?\r\n\r\nThen it would work, because the package is in a writable path.",
"If it's fine for you then this is the recommended way to solve this issue.",
"> If it's fine for you then this is the recommended way to solve this issue.\r\n\r\nI don't want to use a virtual environment, because Nix is fully reproducible, and virtual environments are not. And I am the maintainer of the `transformers` in nixpkgs, so sooner or later I will have to package `nlp`, since it is becoming a dependency of `transformers` ;).",
"Ok interesting. We could have another check to see if it's possible to download and import the datasets script at another location than the module path. I think this would probably involve tweaking the python system path dynamically.\r\n\r\nI don't know anything about Nix so if you want to give this a try your self we can guide you or you can give us more information on your general project and how this works.\r\n\r\nRegarding `nlp` and `transformers`, we are not sure `nlp` will become a required dependency for `transformers`. It will probably be used a lot in the examples but I think it probably won't be a required dependency for the main package since we try to keep it as light as possible in terms of deps.\r\n\r\nHappy to help you make all these things work better for your use-case ",
"@danieldk modules are now installed in a different location (by default in the cache directory of the lib, in `~/.cache/huggingface/modules`). You can also change that using the environment variable `HF_MODULES_PATH`\r\n\r\nFeel free to play with this change from the master branch for now, and let us know if it sounds good for you :)\r\nWe plan to do a release in the next coming days",
"Awesome! I’ll hopefully have some time in the coming days to try this.",
"> Feel free to play with this change from the master branch for now, and let us know if it sounds good for you :)\r\n> We plan to do a release in the next coming days\r\n\r\nThanks for making this change! I just packaged the latest commit on master and it works like a charm now! :partying_face: "
] | 2020-09-01T14:06:14
| 2020-09-11T06:19:24
| 2020-09-11T06:19:24
|
MEMBER
| null | null | null | null |
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:
```
>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 530, in load_dataset
module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 329, in prepare_module
os.makedirs(main_folder_path, exist_ok=True)
File "/nix/store/685kq8pyhrvajah1hdsfn4q7gm3j4yd4-python3-3.8.5/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/datasets/squad'
```
Do you have any suggested workaround for this issue?
Perhaps overriding the default value for `force_local_path` of `prepare_module`?
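
For reference, a minimal sketch of the fix described in the comments above (the `HF_MODULES_PATH` environment variable added on master at the time); the path is illustrative, and the variable is set before importing `nlp` to be safe:
```python
import os
os.environ["HF_MODULES_PATH"] = "/tmp/hf_modules"  # any writable directory

import nlp
squad_dataset = nlp.load_dataset('squad')
```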
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49398?v=4",
"events_url": "https://api.github.com/users/danieldk/events{/privacy}",
"followers_url": "https://api.github.com/users/danieldk/followers",
"following_url": "https://api.github.com/users/danieldk/following{/other_user}",
"gists_url": "https://api.github.com/users/danieldk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/danieldk",
"id": 49398,
"login": "danieldk",
"node_id": "MDQ6VXNlcjQ5Mzk4",
"organizations_url": "https://api.github.com/users/danieldk/orgs",
"received_events_url": "https://api.github.com/users/danieldk/received_events",
"repos_url": "https://api.github.com/users/danieldk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/danieldk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danieldk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/danieldk",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/554/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/554/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 9 days, 16:13:10
|
https://api.github.com/repos/huggingface/datasets/issues/546
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/546/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/546/comments
|
https://api.github.com/repos/huggingface/datasets/issues/546/events
|
https://github.com/huggingface/datasets/issues/546
| 689,186,526
|
MDU6SXNzdWU2ODkxODY1MjY=
| 546
|
Very slow data loading on large dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/agemagician",
"id": 6087313,
"login": "agemagician",
"node_id": "MDQ6VXNlcjYwODczMTM=",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"repos_url": "https://api.github.com/users/agemagician/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"type": "User",
"url": "https://api.github.com/users/agemagician",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much faster.\r\n\r\nHowever for a 1TB dataset, the conversion can indeed take time. You could try to load parts of it in parallel, and then use `nlp.concatenate_datasets` to get your full dataset.",
"Humm, we can give a look at these large scale datasets indeed.\r\n\r\nDo you mind sharing a few stats on your dataset so I can try to test on a similar one?\r\n\r\nIn particular some orders of magnitudes for the number of files, number of lines per files, line lengths.",
"@lhoestq Yes, I understand that the first time requires more time. The concatenate_datasets seems to be a workaround, but I believe a multi-processing method should be integrated into load_dataset to make it easier and more efficient for users.\r\n\r\n@thomwolf Sure, here are the statistics:\r\nNumber of lines: 4.2 Billion\r\nNumber of files: 6K\r\nNumber of tokens: 800 Billion\r\nThe number of lines is distributed equally across these 6k files.\r\nThe line length varies between 100 tokens to 40k tokens.\r\n",
"@agemagician you can give a try at a multithreaded version if you want (currently on the #548).\r\n\r\nTo test it, you just need to copy the new `text` processing script which is [here](https://github.com/huggingface/nlp/blob/07d92a82b7594498ff702f3cca55c074e2052257/datasets/text/text.py) somewhere on your drive and give it's local path instead of `text` to `load_dataset`. E.g. in your example:\r\n```python\r\ntrain_files = glob.glob(\"xxx/*.txt\",recursive=True)\r\nrandom.shuffle(train_files)\r\n\r\nprint(train_files)\r\n\r\ndataset = nlp.load_dataset('./datasets/text.py', # path to where you've dowloaded the multi-threaded text loading script\r\n data_files=train_files,\r\n name=\"customDataset\",\r\n version=\"1.0.0\",\r\n cache_dir=\"xxx/nlp\")\r\n```",
"I have already generated the dataset, but now I tried to reload it and it is still very slow.\r\n\r\nI also have installed your commit and it is slow, even after the dataset was already generated.\r\n`pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257`\r\n\r\nIt uses only a single thread.\r\n\r\nDid I miss something ?",
"As mentioned in #548 , each time you call `load_dataset` with `data_files=`, they are hashed to get the cache directory name. Hashing can be too slow with 1TB of data. I feel like we should have a faster way of getting a hash that identifies the input data files",
"I believe this is really a very important feature, otherwise, we will still have the issue of too slow loading problems even if the data cache generation is fast.",
"Hmm ok then maybe it's the hashing step indeed.\r\n\r\nLet's see if we can improve this as well.\r\n\r\n(you will very likely have to regenerate your dataset if we change this part of the lib though since I expect modifications on this part of the lib to results in new hashes)",
"Also, @agemagician you have to follow the step I indicate in my previous message [here](https://github.com/huggingface/nlp/issues/546#issuecomment-684648927) to use the new text loading script.\r\n\r\nJust doing `pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257` like you did won't use the new script (they are not inside the library but hosted on our hub).",
"No problem, I will regenerate it. This will make us see if we solved both issues and now both the data generation step, as well as the hashing step, is fast.",
"Any news for the hashing ?",
"I'm working on it today :)",
"Ok so now the text files won't be hashed.\r\n\r\nI also updated #548 to include this change.\r\nLet us know if it helps @agemagician :)",
"Perfect thanks for your amazing work.",
"Right now, for caching 18Gb data, it is taking 1 hour 10 minute. Is that proper expected time? @lhoestq @agemagician \r\nIn this rate (assuming large file will caching at the same rate) caching full mC4 (27TB) requires a month (~26 days). \r\n",
"Hi ! Currently it is that slow because we haven't implemented parallelism for the dataset generation yet.\r\nThough we will definitely work on this :)\r\n\r\nFor now I'd recommend loading the dataset shard by shard in parallel, and then concatenate them:\r\n```python\r\n# in one process, load first 100 files for english\r\nshard1 = load_dataset(\"allenai/c4\", data_files=\"multilingual/c4-en.tfrecord-000**.json.gz\")\r\n# in another process load next 100 files for english\r\nshard2 = load_dataset(\"allenai/c4\", data_files=\"multilingual/c4-en.tfrecord-001**.json.gz\")\r\n\r\n# finally\r\nconcatenate_datasets([shard1, shard2, ...])",
"Thanks for the help..!!!",
"Sorry to write on a closed issue but, has there been any progress on parallelizing the `load_dataset` function?",
"Hi ! No but this is in our plans (probably a few weeks)",
"I'm literally crying waiting for the trainer to restart from checkpoint. It's getting stuck at `get_train_dataloader` and I think this is to do with the same issue... has there been any progress on this?",
"> I'm literally crying waiting for the trainer to restart from checkpoint. It's getting stuck at get_train_dataloader and I think this is to do with the same issue...\r\n\r\nOnce the dataset is cached once, it's not regenerated again. Your issue seems different",
"hmmm, yes. I'll come back with details on this, fairly easy to reproduce. Takes about 30 minutes to get from checkpoint loading to starting training...",
"@lhoestq yo, any news in making it possible to download large datasets faster?",
"@lhoestq For some reason [setting num_proc](https://discuss.huggingface.co/t/how-can-i-multithreadedly-download-a-huggingface-dataset/56178/2?u=kopyl) does not work at all... My dataset has 58 parquet files and i was hoping passing `num_proc` to `load_dataset` would spawn 58 Python processes each downloading its own parquet so I can load my dataset in 1 minutes instead of 50...",
"It does spawn `num_proc` processes. Note that when you download in parallel you're often bounded by your bandwidth at one point, so 50 processes is unlikely to get you a x50 download speed up but a bit less",
"> It does spawn `num_proc` processes. Note that when you download in parallel you're often bounded by your bandwidth at one point, so 50 processes is unlikely to get you a x50 download speed up but a bit less\r\n\r\nwhy then i see only 1 parquet file download progress bar at a time?",
"Ah indeed parallel downloads are not enabled yet for some datasets.\r\nI opened https://github.com/huggingface/datasets/pull/6551 to fix this.\r\n",
"@lhoestq thank you very much, i though for a second i'm just tripping."
] | 2020-08-31T12:57:23
| 2024-01-02T20:26:24
| 2020-09-08T10:19:57
|
NONE
| null | null | null | null |
I made a simple Python script to check the NLP library's loading speed on 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small (about 1 GB), but it doesn't scale.
It also uses only a single thread during the data loading step.
```
train_files = glob.glob("xxx/*.txt",recursive=True)
random.shuffle(train_files)
print(train_files)
dataset = nlp.load_dataset('text',
data_files=train_files,
name="customDataset",
version="1.0.0",
cache_dir="xxx/nlp")
```
Is there something that I am missing ?
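
For reference, a minimal sketch of the shard-and-concatenate workaround suggested in the comments above; the 4-way split and the single-process loop are illustrative (in practice each shard could be built in its own process and the cached parts concatenated afterwards):
```python
import glob
import nlp

train_files = sorted(glob.glob("xxx/*.txt", recursive=True))
shards = [train_files[i::4] for i in range(4)]  # split the file list into 4 groups

parts = [
    nlp.load_dataset("text", data_files=shard, cache_dir="xxx/nlp", split="train")
    for shard in shards
]
dataset = nlp.concatenate_datasets(parts)
```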
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/546/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/546/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 7 days, 21:22:34
|
https://api.github.com/repos/huggingface/datasets/issues/545
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/545/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/545/comments
|
https://api.github.com/repos/huggingface/datasets/issues/545/events
|
https://github.com/huggingface/datasets/issues/545
| 689,138,878
|
MDU6SXNzdWU2ODkxMzg4Nzg=
| 545
|
New release coming up for this library
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Update: release is planed mid-next week."
] | 2020-08-31T11:37:38
| 2021-01-13T10:59:04
| 2021-01-13T10:59:04
|
MEMBER
| null | null | null | null |
Hi all,
A few words on the roadmap for this library.
The next release will be a big one and is planned for the end of this week.
In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval techniques), it will:
- have support for multi-modal datasets
- include various significant improvements on speed for standard processing (map, shuffling, ...)
- have better support for metrics (better caching and a robust API) and a bigger focus on reproducibility
- change the name to the final name (voted by the community): `datasets`
- be the 1.0.0 release as we think the API will be mostly stabilized from now on
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 4,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/545/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/545/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 134 days, 23:21:26
|
https://api.github.com/repos/huggingface/datasets/issues/543
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/543/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/543/comments
|
https://api.github.com/repos/huggingface/datasets/issues/543/events
|
https://github.com/huggingface/datasets/issues/543
| 688,644,407
|
MDU6SXNzdWU2ODg2NDQ0MDc=
| 543
|
nlp.load_dataset is not safe for multi processes when loading from local files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55288513?v=4",
"events_url": "https://api.github.com/users/luyug/events{/privacy}",
"followers_url": "https://api.github.com/users/luyug/followers",
"following_url": "https://api.github.com/users/luyug/following{/other_user}",
"gists_url": "https://api.github.com/users/luyug/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/luyug",
"id": 55288513,
"login": "luyug",
"node_id": "MDQ6VXNlcjU1Mjg4NTEz",
"organizations_url": "https://api.github.com/users/luyug/orgs",
"received_events_url": "https://api.github.com/users/luyug/received_events",
"repos_url": "https://api.github.com/users/luyug/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/luyug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luyug/subscriptions",
"type": "User",
"url": "https://api.github.com/users/luyug",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I'll take a look!"
] | 2020-08-30T03:20:34
| 2020-08-31T11:15:10
| 2020-08-31T11:15:10
|
NONE
| null | null | null | null |
Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])`
concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438
Likely because multiple processes step into download_and_prepare, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/load.py#L550-L554
This can happen when launching distributed training with commands like `python -m torch.distributed.launch --nproc_per_node 4` on a new collection of files never loaded before.
I can create a PR that puts in some file locks. It would be helpful if I could be informed of the convention for naming and placement of the lock.
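
For context, a sketch of a common user-side workaround (not part of `nlp` itself, and assuming the torch process group has already been initialised by the launcher): let rank 0 build the cache first, then have the other ranks load from the prepared cache:
```python
import torch.distributed as dist
import nlp

data_files = ['file_1.csv', 'file_2.csv']
if dist.get_rank() == 0:
    dataset = nlp.load_dataset('csv', data_files=data_files)  # builds the cache once
dist.barrier()  # wait until rank 0 has finished preparing the dataset
if dist.get_rank() != 0:
    dataset = nlp.load_dataset('csv', data_files=data_files)  # now reads from the cache
```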
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/543/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/543/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 7:54:36
|
https://api.github.com/repos/huggingface/datasets/issues/541
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/541/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/541/comments
|
https://api.github.com/repos/huggingface/datasets/issues/541/events
|
https://github.com/huggingface/datasets/issues/541
| 688,521,224
|
MDU6SXNzdWU2ODg1MjEyMjQ=
| 541
|
Best practices for training tokenizers with nlp
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11806234?v=4",
"events_url": "https://api.github.com/users/moskomule/events{/privacy}",
"followers_url": "https://api.github.com/users/moskomule/followers",
"following_url": "https://api.github.com/users/moskomule/following{/other_user}",
"gists_url": "https://api.github.com/users/moskomule/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/moskomule",
"id": 11806234,
"login": "moskomule",
"node_id": "MDQ6VXNlcjExODA2MjM0",
"organizations_url": "https://api.github.com/users/moskomule/orgs",
"received_events_url": "https://api.github.com/users/moskomule/received_events",
"repos_url": "https://api.github.com/users/moskomule/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/moskomule/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moskomule/subscriptions",
"type": "User",
"url": "https://api.github.com/users/moskomule",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library"
] | 2020-08-29T12:06:49
| 2022-10-04T17:28:04
| 2022-10-04T17:28:04
|
NONE
| null | null | null | null |
Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the documentation and examples, I could only find pre-trained tokenizers being used.
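
For reference, a minimal sketch along the lines of the tokenizers docs linked in the comment above; the dataset choice and trainer settings here are illustrative, not a recommendation:
```python
import nlp
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=30_000, special_tokens=["[UNK]"])

def batch_iterator(batch_size=1000):
    # stream the dataset in batches instead of materialising all the text in memory
    for i in range(0, len(dataset), batch_size):
        yield dataset[i : i + batch_size]["text"]

tokenizer.train_from_iterator(batch_iterator(), trainer=trainer)
tokenizer.save("tokenizer.json")
```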
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/541/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/541/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 766 days, 5:21:15
|
https://api.github.com/repos/huggingface/datasets/issues/539
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/539/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/539/comments
|
https://api.github.com/repos/huggingface/datasets/issues/539/events
|
https://github.com/huggingface/datasets/issues/539
| 688,323,602
|
MDU6SXNzdWU2ODgzMjM2MDI=
| 539
|
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gaguilar",
"id": 5833357,
"login": "gaguilar",
"node_id": "MDQ6VXNlcjU4MzMzNTc=",
"organizations_url": "https://api.github.com/users/gaguilar/orgs",
"received_events_url": "https://api.github.com/users/gaguilar/received_events",
"repos_url": "https://api.github.com/users/gaguilar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gaguilar",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) and running the following command from the root of the repo:\r\n```bash\r\npython nlp-cli test ./datasets/lince --save_infos --all_configs\r\n```\r\nAnd then you can open a pull-request with the updated json file.\r\n\r\nOtherwise we'll do it sometime this week.",
"Hi @thomwolf \r\n\r\nThanks for the details! I just created a PR with the updated `dataset_infos.json` file (#550).",
"Thanks for updating the json file. Closing this one"
] | 2020-08-28T19:55:51
| 2020-09-03T16:34:02
| 2020-09-03T16:34:01
|
CONTRIBUTOR
| null | null | null | null |
Hi,
There is a `NonMatchingChecksumError` for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update to that dataset.
How can I update the checksum of the library to solve this issue? The error is below and it also appears in the [nlp viewer](https://huggingface.co/nlp/viewer/?dataset=lince&config=lid_msaea):
```python
import nlp
nlp.load_dataset('lince', 'lid_msaea')
```
Output:
```
NonMatchingChecksumError: ['https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/lid_msaea.zip']
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 196, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 150, in get
builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
download_config.force_download = download_mode == FORCE_REDOWNLOAD
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 469, in _download_and_prepare
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 36, in verify_checksums
raise NonMatchingChecksumError(str(bad_urls))
```
Thank you in advance!
@lhoestq
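For anyone hitting this before the checksums are regenerated, a hedged local workaround (not an official fix): the `ignore_verifications` argument of `load_dataset` skips the checksum check entirely.

```python
import nlp

# Skip checksum verification until dataset_infos.json is updated upstream.
dataset = nlp.load_dataset("lince", "lid_msaea", ignore_verifications=True)
```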
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/539/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/539/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 5 days, 20:38:10
|
https://api.github.com/repos/huggingface/datasets/issues/537
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/537/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/537/comments
|
https://api.github.com/repos/huggingface/datasets/issues/537/events
|
https://github.com/huggingface/datasets/issues/537
| 687,614,699
|
MDU6SXNzdWU2ODc2MTQ2OTk=
| 537
|
[Dataset] RACE dataset Checksums error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abarbosa94",
"id": 6608232,
"login": "abarbosa94",
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abarbosa94",
"user_view_type": "public"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
| null |
[] |
[
"`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.\r\nEither the file you downloaded was corrupted along the way, or the host updated the file.\r\nCould you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an update in the data, and we may have to update the expected checksum value.",
"I just cleared the cache an run it again. The error persists ):\r\n\r\n```\r\n nlp (master) $ rm -rf /Users/abarbosa/.cache/huggingface/\r\n nlp (master) $ python\r\nPython 3.8.5 (default, Aug 5 2020, 03:39:04)\r\n[Clang 10.0.0 ] :: Anaconda, Inc. on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import nlp\r\n>>> dataset = nlp.load_dataset(\"race\")\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.39k/4.39k [00:00<00:00, 661kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.81k/1.81k [00:00<00:00, 644kB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset race/default (download: 84.52 MiB, generated: 132.61 MiB, post-processed: Unknown size, total: 217.13 MiB) to /Users/abarbosa/.cache/huggingface/datasets/race/default/0.1.0/5461327f1a83549ca0d845a3159c806d2baf4f8d0d8f7d657157ce7cdf3899c2...\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25.4M/25.4M [01:03<00:00, 401kB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/abarbosa/Documents/nlp/src/nlp/load.py\", line 550, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/abarbosa/Documents/nlp/src/nlp/builder.py\", line 471, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/abarbosa/Documents/nlp/src/nlp/builder.py\", line 530, in _download_and_prepare\r\n verify_checksums(\r\n File \"/Users/abarbosa/Documents/nlp/src/nlp/utils/info_utils.py\", line 38, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\nnlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz']\r\n>>>\r\n```",
"Dealing with the same issue please update the checksum on nlp library end. The data seems to have changed on their end.",
"We have a discussion on this datasets here: https://github.com/huggingface/nlp/pull/540\r\n\r\nFeel free to participate if you have some opinion on the scope of data which should be included in this dataset.",
"At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n\r\n",
"> At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n\r\nCould you upload this please?",
"> > At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n> \r\n> Could you upload this please?\r\n\r\nNot sure if I can upload it according to their license (\"You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.\").",
"I managed to fix it in #540 :)",
"Closing since @540 is merged\r\n\r\nThanks again @abarbosa94 "
] | 2020-08-27T23:58:16
| 2020-09-18T12:07:04
| 2020-09-18T12:07:04
|
CONTRIBUTOR
| null | null | null | null |
Hi there, I would just like to use this awesome lib to fine-tune on the RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-15-8bf7603ce0ed> in <module>
----> 1 dataset = nlp.load_dataset("race")
2 len(dataset["train"]), len(dataset["validation"])
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
546
547 # Download and prepare data
--> 548 builder_instance.download_and_prepare(
549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
550 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
460 logger.info("Dataset not on Hf google storage. Downloading and preparing it from source")
461 if not downloaded_from_gcs:
--> 462 self._download_and_prepare(
463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
464 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
519 # Checksums verification
520 if verify_infos:
--> 521 verify_checksums(
522 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
523 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz']
```
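
A sketch of the diagnostic step suggested in the comment thread: clear the cached copy of RACE and retry. If the error persists, the upstream archive has changed and the expected checksum itself needs updating. The cache path below is the default location and may differ on your machine.

```python
import shutil
from pathlib import Path

import nlp

# Remove only the cached RACE data, then trigger a fresh download.
cache_dir = Path.home() / ".cache" / "huggingface" / "datasets" / "race"
shutil.rmtree(cache_dir, ignore_errors=True)

dataset = nlp.load_dataset("race")
```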
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/537/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/537/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 21 days, 12:08:48
|
https://api.github.com/repos/huggingface/datasets/issues/534
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/534/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/534/comments
|
https://api.github.com/repos/huggingface/datasets/issues/534/events
|
https://github.com/huggingface/datasets/issues/534
| 686,115,912
|
MDU6SXNzdWU2ODYxMTU5MTI=
| 534
|
`list_datasets()` is broken.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/314169?v=4",
"events_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/events{/privacy}",
"followers_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/followers",
"following_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/following{/other_user}",
"gists_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ashutosh-dwivedi-e3502",
"id": 314169,
"login": "ashutosh-dwivedi-e3502",
"node_id": "MDQ6VXNlcjMxNDE2OQ==",
"organizations_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/orgs",
"received_events_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/received_events",
"repos_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ashutosh-dwivedi-e3502",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release",
"What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```",
"Thanks @lhoestq . "
] | 2020-08-26T08:19:01
| 2020-08-27T06:31:11
| 2020-08-27T06:31:11
|
NONE
| null | null | null | null |
version = '0.4.0'
`list_datasets()` is broken. It results in the following error:
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
375 if cls in self.type_pprinters:
376 # printer registered in self.type_pprinters
--> 377 return self.type_pprinters[cls](obj, self, cycle)
378 else:
379 # deferred printer
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in inner(obj, p, cycle)
553 p.text(',')
554 p.breakable()
--> 555 p.pretty(x)
556 if len(obj) == 1 and type(obj) is tuple:
557 # Special case for 1-item tuples.
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
392 if cls is not object \
393 and callable(cls.__dict__.get('__repr__')):
--> 394 return _repr_pprint(obj, self, cycle)
395
396 return _default_pprint(obj, self, cycle)
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
698 """A pprint that just redirects to the normal repr function."""
699 # Find newlines and replace them with p.break_()
--> 700 output = repr(obj)
701 lines = output.splitlines()
702 with p.group():
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/nlp/hf_api.py in __repr__(self)
110
111 def __repr__(self):
--> 112 single_line_description = self.description.replace("\n", "")
113 return f"nlp.ObjectInfo(id='{self.id}', description='{single_line_description}', files={self.siblings})"
114
AttributeError: 'NoneType' object has no attribute 'replace'
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/314169?v=4",
"events_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/events{/privacy}",
"followers_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/followers",
"following_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/following{/other_user}",
"gists_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ashutosh-dwivedi-e3502",
"id": 314169,
"login": "ashutosh-dwivedi-e3502",
"node_id": "MDQ6VXNlcjMxNDE2OQ==",
"organizations_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/orgs",
"received_events_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/received_events",
"repos_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ashutosh-dwivedi-e3502",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/534/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/534/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 22:12:10
|
https://api.github.com/repos/huggingface/datasets/issues/532
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/532/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/532/comments
|
https://api.github.com/repos/huggingface/datasets/issues/532/events
|
https://github.com/huggingface/datasets/issues/532
| 685,540,614
|
MDU6SXNzdWU2ODU1NDA2MTQ=
| 532
|
File exists error when used with TPU
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20531705?v=4",
"events_url": "https://api.github.com/users/go-inoue/events{/privacy}",
"followers_url": "https://api.github.com/users/go-inoue/followers",
"following_url": "https://api.github.com/users/go-inoue/following{/other_user}",
"gists_url": "https://api.github.com/users/go-inoue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/go-inoue",
"id": 20531705,
"login": "go-inoue",
"node_id": "MDQ6VXNlcjIwNTMxNzA1",
"organizations_url": "https://api.github.com/users/go-inoue/orgs",
"received_events_url": "https://api.github.com/users/go-inoue/received_events",
"repos_url": "https://api.github.com/users/go-inoue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/go-inoue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/go-inoue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/go-inoue",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] |
[
"I am facing probably facing similar issues with \r\n\r\n`wiki40b_en_100_0`",
"Could you try to run `dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")` once before calling the script ?\r\n\r\nIt looks like several processes try to create the dataset in arrow format at the same time. If the dataset is already created it should be fine",
"Thanks! I tested on 328MB text data on `n1-standard-8 (8 vCPUs, 30 GB memory)`. The main script ran without any issue, but it seems to require a huge space in the drive.\r\n\r\nAs suggested, I ran the following script before running the pre-training command with `xla_spawn.py`.\r\n\r\n```python\r\nfrom nlp import load_dataset\r\n\r\nfile_path=\"your_file_name\"\r\nload_dataset(\"text\", data_files=file_path, split=\"train\")\r\n```\r\nThis will create `text-train.arrow` under the default cache directory. Then, I run the script with `xla_spawn.py`. It will load data from the cached file. My understanding is that there's no other way but to do this two-step process with the current version (0.4) of `nlp`.\r\n\r\nDuring another caching process that happens in the main script:\r\n\r\n```\r\n08/26/2020 09:19:51 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 09:19:53 - INFO - nlp.arrow_dataset - Caching processed dataset at /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-f90f341e5308a7469\r\n8d872bcc88f9c0e.arrow\r\n```\r\n\r\n`nlp` generates a temporary file per core, each of which is three times larger than the original text data. If each process is actually writing on the disk, you will need a huge amount of space in your drive. (Maybe I'm missing something.)\r\n\r\n```\r\n-rw-r--r-- 1 ***** ***** 674 Aug 26 09:19 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 09:19 LICENSE\r\n-rw-r--r-- 1 ***** ***** 332M Aug 26 09:10 text-train.arrow\r\n-rw------- 1 ***** ***** 940M Aug 26 09:31 tmp0k43sazw\r\n-rw------- 1 ***** ***** 940M Aug 26 09:31 tmp7sxs9mj5\r\n-rw------- 1 ***** ***** 939M Aug 26 09:31 tmpbbiqw2vp\r\n-rw------- 1 ***** ***** 937M Aug 26 09:31 tmpjxb5ptyu\r\n-rw------- 1 ***** ***** 933M Aug 26 09:31 tmpk3hkdh0e\r\n-rw------- 1 ***** ***** 944M Aug 26 09:31 tmpnoalwftz\r\n-rw------- 1 ***** ***** 931M Aug 26 09:31 tmpuxdr_dz3\r\n-rw------- 1 ***** ***** 945M Aug 26 09:31 tmpxjyuy6dk\r\n```\r\nAfter the caching process, they seem to be merged into one file.\r\n\r\n```\r\n-rw------- 1 ***** ***** 989M Aug 26 09:32 cache-f90f341e5308a74698d872bcc88f9c0e.arrow\r\n-rw-r--r-- 1 ***** ***** 674 Aug 26 09:19 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 09:19 LICENSE\r\n-rw-r--r-- 1 ***** ***** 332M Aug 26 09:10 text-train.arrow\r\n```",
"Again it looks like every process tries to tokenize the full dataset at the same time.\r\nIf you do the tokenization before calling `xla_spawn.py` once, then each process will then use the tokenized cached file `cache-f90f341e5308a74698d872bcc88f9c0e.arrow` and not recompute it.\r\n\r\nNot sure if there's a better way to do that cc @julien-c @thomwolf ",
"I wrote a separate script just for preparing a cached file, including tokenization. Each process did use the tokenized cached file.\r\n\r\nCurrently I'm testing the pipeline on 24GB text data. It took about 1.5 hour to create a cached file on `n1-highmem-16 (16 vCPUs, 104 GB memory)`. I assume loading this cached file in the main script with `xla_spawn.py` won't be an issue (even if there are 8 processes).\r\n\r\n```\r\ntotal 98G\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 13:38 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 12:24 ..\r\n-rw------- 1 ***** ***** 74G Aug 26 13:38 cache-a7aa04134ba7b1aff5d9710f14a4e334.arrow\r\n-rw-r--r-- 1 ***** ***** 681 Aug 26 12:24 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 12:24 LICENSE\r\n-rw-r--r-- 1 ***** ***** 25G Aug 26 12:24 text-train.arrow\r\n```",
"Yes loading the cached file should be fine from different processes",
"Sorry, I thought it was working, but actually the second call doesn't use the cached file that was generated separately, and it will generate another cache-****.arrorw file with a different name. If I run the training script again (with `xla_spawn.py`), it will use the second cached file, which was generated by the training script itself in the previous run.\r\n\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 15:35 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:29 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 15:35 cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n-rw------- 1 ***** ***** 99M Aug 26 15:29 cache-69633651476e943b93c89ace715f9487.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 15:33 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 15:33 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:29 text-train.arrow\r\n```",
"So if I understand correctly it means that the cached file generated by your separated script is different by the one used by the training script ?",
"Yes.\r\n\r\n1. `cache-69633651476e943b93c89ace715f9487.arrow` was generated with a separate script. \r\n2. I ran the entire script with `xla_spawn.py`.\r\n3. `cache-69633651476e943b93c89ace715f9487.arrow` is not used.\r\n4. `cache-0d77dfce704493dbe63f071eed6a5431.arrow` is created.\r\n5. training starts...\r\n\r\nNow, if I kill the process at step 5, and do the step 2 again, it will use `cache-0d77dfce704493dbe63f071eed6a5431.arrow` (cached file created at step 4) without any issue.\r\n\r\nI used the following to generate the first cached file.\r\n```python\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\n```",
"1. Here's the log from the first step.\r\n```\r\nDownloading and preparing dataset text/default-e84dd29acc4ad9ef (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDataset text downloaded and prepared to /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d. Subsequent calls will reuse this data.\r\n```\r\nThere's a file named `cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow`, so it did create a cached file.\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 15:59 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:58 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 15:59 cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 15:58 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 15:58 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:58 text-train.arrow\r\n```\r\n2. Ideally, `cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow` should be used in `run_language_modeling.py` (modified version using `nlp`) with `xla_spawn.py`. But it looks like it's creating a new cached file.\r\n\r\n```\r\n08/26/2020 16:13:03 - INFO - filelock - Lock 139635836351096 released on /home/*****/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.202fa4f84f552bff1f5400ae012663839c61efb3de068c6c8722d34ac0ea6192\r\n.py.lock\r\n08/26/2020 16:13:03 - WARNING - nlp.builder - Using custom data configuration default\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:13:03 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:13:03 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:13:05 - INFO - nlp.arrow_dataset - Caching processed dataset at 
/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-0d77dfce704493dbe\r\n63f071eed6a5431.arrow\r\n^M 0%| | 0/100 [00:00<?, ?it/s]08/26/2020 16:13:05 - INFO - nlp.arrow_dataset - Caching processed dataset at /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6\r\nfe661fe4d070d380d/cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n```\r\n\r\nThere are two cached files in the directory:\r\n\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 16:14 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:58 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 16:14 cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n-rw------- 1 ***** ***** 99M Aug 26 15:59 cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 16:13 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 16:13 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:58 text-train.arrow\r\n```\r\n\r\nIf I kill the process, and run it again, it will use the second cached file.\r\n\r\n```\r\n08/26/2020 16:19:52 - WARNING - nlp.builder - Using custom data configuration default\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:19:52 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:19:52 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:19:53 - INFO - nlp.arrow_dataset - Loading cached processed dataset at /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-0d77dfce70\r\n4493dbe63f071eed6a5431.arrow\r\n08/26/2020 16:19:53 - INFO - nlp.arrow_dataset - Set __getitem__(key) output type to torch for ['input_ids'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\n```",
"Thanks for all the details.\r\nThe two cached files are supposed to be the same. I suspect that the caching has a problem with the tokenizer.\r\nWhich tokenizer did you use ?",
"I trained a byte-level BPE tokenizer on my data with `tokenziers` library following this [example](https://github.com/huggingface/tokenizers/blob/master/bindings/python/examples/train_bytelevel_bpe.py).\r\n\r\nAnd I put these model files in a directory named `\"model_name\"`. I also put config.json, which is the original RoBERTa config file.\r\n\r\n```bash\r\n%ls model_name\r\nconfig.json merges.txt vocab.json\r\n```\r\n\r\n[This](https://github.com/huggingface/transformers/blob/4bd7be9a4268221d2a0000c7e8033aaeb365c03b/examples/language-modeling/run_language_modeling.py#L196) is the line where `run_language_modeling.py` loads the tokenier.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n```\r\n\r\nI use `\"model_name\"` for `model_args.tokenizer_name`. I don't specify `model_args.cache_dir`. It is 'None' by default.",
"In my separated script for caching, I'm using `use_fast=True` when initializing a tokenizer.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(args.config_name, use_fast=True)\r\n```\r\nI wasn't using that option in the main script. That could be the reason...",
"Yea it could definitely explain why you have two different cache files.\r\nLet me know if using the same tokenizers on both sides fixes the issue",
"It still creates a new file even if I remove `use_fast=True`... \r\n\r\nHere's the script used to create a cached file.\r\n```python \r\n#!/usr/bin/env python3\r\n\r\nimport argparse\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\nfrom nlp import load_dataset\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--config_name', type=str, help='Pretrained config name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n args = parser.parse_args()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(args.config_name)\r\n\r\n dataset = load_dataset(\"text\", data_files=args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nHere's how the data is loaded in the modified `run_language_modeling.py`. [[original function](https://github.com/huggingface/transformers/blob/971d1802d009d9996b36a34a34477cee849ef39f/examples/language-modeling/run_language_modeling.py#L128-L135)]\r\n\r\n```python\r\ndef get_dataset(args: DataTrainingArguments, tokenizer: PreTrainedTokenizer, evaluate=False):\r\n file_path = args.eval_data_file if evaluate else args.train_data_file\r\n split = \"validation\" if evaluate else \"train\"\r\n if args.line_by_line:\r\n # return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n return dataset\r\n\r\n else:\r\n return TextDataset(\r\n tokenizer=tokenizer, file_path=file_path, block_size=args.block_size, overwrite_cache=args.overwrite_cache\r\n )\r\n```\r\n\r\nProbably I don't need this part in the main script,\r\n\r\n```python\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n```\r\nand simply do this?\r\n```python\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\nreturn dataset\r\n```",
"You need this part in the main script or it will use the dataset that is not tokenized\r\n\r\n",
"I can see that the tokenizer in `run_language_modeling.py` is not instantiated the same way as in your separated script.\r\nIndeed we can see L196:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n```\r\nCould you try to make it so they are instantiated the exact same way please ?",
"I updated my separated script, but it's creating a cached file again. If I don't use the `model_args.cache_dir`, both will get `None`, so they should be the same.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport argparse\r\n\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--tokenizer_name', type=str, help='Pretrained tokenizer name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--cache_dir', type=str, default=None, help='Where do you want to store the pretrained models downloaded from s3')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n\r\n model_args = parser.parse_args()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n\r\n dataset = load_dataset(\"text\", data_files=model_args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=model_args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nIs there a way to specify the cache file to load, and skip the re-computation?",
"Could you also check that the `args.block_size` used in the lambda function is the same as well ?",
"Here's a minimal working example to reproduce this issue.\r\n\r\nAssumption:\r\n- You have access to TPU.\r\n- You have installed `transformers` and `nlp`.\r\n- You have tokenizer files (`config.json`, `merges.txt`, `vocab.json`) under the directory named `model_name`.\r\n- You have `xla_spawn.py` (Download from https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py).\r\n- You have saved the following script as `prepare_cached_dataset.py`.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport argparse\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--tokenizer_name', type=str, help='Pretrained tokenizer name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--cache_dir', type=str, default=None, help='Where do you want to store the pretrained models downloaded from s3')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n parser.add_argument('--tpu_num_cores', type=int, default=1, help='Number of TPU cores to use (1 or 8). For xla_apwan.py')\r\n model_args = parser.parse_args()\r\n \r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=True)\r\n \r\n dataset = load_dataset(\"text\", data_files=model_args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=model_args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n- Run the following command. Replace `your_training_data` with some text file.\r\n\r\n```bash\r\nexport TRAIN_DATA=your_training_data\r\n\r\npython prepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n- Check the cached directory.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 132M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:08 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:08 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:08 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n\r\n- Run the same script again. 
(The output should be just `Using custom data configuration default`.)\r\n```\r\npython prepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n- Check the cached directory.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 132M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:08 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:20 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:20 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n- The cached file (`cache-bfc7cb0702426d19242db5e8c079f04b.arrow`) is reused.\r\n- Now, run this script with `xla_spawn.py`. Ideally, it should reuse the cached file, however, you will see each process is creating a cache file again.\r\n\r\n```bash\r\npython xla_spawn.py --num_cores 8 \\\r\nprepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n\r\n- Check the cached directory. There are two arrrow files.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 230M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:25 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw------- 1 ***** ***** 99M Aug 28 13:25 cache-e0e2313e49c8a110aafcc8133154c19a.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:24 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:24 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n",
"I ended up specifying the `cache_file_name` argument when I call `map` function.\r\n\r\n```python\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True, truncation=True, max_length=args.block_size),\r\n batched=True,\r\n cache_file_name=cache_file_name)\r\n```\r\n\r\nNote:\r\n- `text` dataset in `nlp` does not strip `\"\\n\"`. If you want the same output as in [`LineByLineTextDataset`](https://github.com/huggingface/transformers/blob/afc4ece462ad83a090af620ff4da099a0272e171/src/transformers/data/datasets/language_modeling.py#L88-L111), you would need to create your own dataset class where you replace `line` to `line.strip()` [here](https://github.com/huggingface/nlp/blob/master/datasets/text/text.py#L35).\r\n"
] | 2020-08-25T14:36:38
| 2020-09-01T12:14:56
| null |
NONE
| null | null | null | null |
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L131) as follows:
```python
# line 131: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```
When I run this with [`xla_spawn.py`](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py), I get the following error (it produces one message per core in TPU, which I believe is fine).
It seems the current version doesn't take into account distributed training processes as in [this example](https://github.com/huggingface/transformers/blob/a573777901e662ec2e565be312ffaeedef6effec/src/transformers/data/datasets/language_modeling.py#L35-L38)?
```
08/25/2020 13:59:41 - WARNING - nlp.builder - Using custom data configuration default
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:6: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:4: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:1: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:7: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:3: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:2: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:0: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
Traceback (most recent call last):
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
```
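The tracebacks above are interleaved because every TPU process calls `load_dataset` at the same time and races to create the same `.incomplete` cache directory. A minimal workaround sketch, using torch_xla's rendezvous as a barrier so the master process builds the cache first; wiring this into `run_language_modeling.py` and the barrier tag are my own assumptions, not part of the script:

```python
import torch_xla.core.xla_model as xm
from nlp import load_dataset

def get_dataset(file_path):
    # Non-master processes wait here until the master has built the cache.
    if not xm.is_master_ordinal():
        xm.rendezvous("dataset_cache")
    dataset = load_dataset("text", data_files=file_path, split="train")
    # The master releases the barrier only after the cache is fully written,
    # so the other processes reuse it instead of racing on the same directory.
    if xm.is_master_ordinal():
        xm.rendezvous("dataset_cache")
    return dataset
```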
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/532/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/532/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/525
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/525/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/525/comments
|
https://api.github.com/repos/huggingface/datasets/issues/525/events
|
https://github.com/huggingface/datasets/issues/525
| 683,875,483
|
MDU6SXNzdWU2ODM4NzU0ODM=
| 525
|
wmt download speed example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r\nAlso cc @patrickvonplaten ",
"Mirror is not official.",
"Shall we host the files ourselves or it is fine to use this mirror in your opinion ?",
"Should we add an argument in `load_dataset` to override some URL with a custom URL (e.g. mirror) or a local path?\r\n\r\nThis could also be used to provide local files instead of the original files as requested by some users (e.g. when you made a dataset with the same format than SQuAD and what to use it instead of the official dataset files).",
"@lhoestq I think we should host it ourselves. I'll put the subset of wmt (without preprocessed files) that we need on s3 and post a link over the weekend.",
"Is there a solution yet? The download speed is still too slow. 60-70kbps download for wmt16 and around 100kbps for wmt19. @sshleifer ",
"I'm working on mirror links which will provide high download speed :)\r\nSee https://github.com/huggingface/datasets/issues/1892",
"Resolved via https://github.com/huggingface/datasets/pull/1912"
] | 2020-08-21T23:29:06
| 2022-10-04T17:45:39
| 2022-10-04T17:45:39
|
CONTRIBUTOR
| null | null | null | null |
Continuing from the Slack 1.0 roadmap thread with @lhoestq, I realized that slow downloads only happen sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same GCP us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 KB/s.
Whereas
```
pip install gdown # download from google drive
!gdown https://drive.google.com/uc?id=1iO7um-HWoNoRKDtw27YUSgyeubn9uXqj
```
Downloads at 127 MB/s. (The file is a copy of wmt-en-de raw).
```
nlp.load_dataset('wmt16', 'ro-en')
```
goes at 27 MB/s, much faster.
If we wget the same data from S3, the download speed is the same, but the file is ¼ of the size:
```
wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro_packed_200_rand.tgz
```
Finally,
```
nlp.load_dataset('wmt19', 'zh-en')
```
Starts fast, but is broken (duplicate of #493).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/525/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/525/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 773 days, 18:16:33
|
https://api.github.com/repos/huggingface/datasets/issues/524
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/524/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/524/comments
|
https://api.github.com/repos/huggingface/datasets/issues/524/events
|
https://github.com/huggingface/datasets/issues/524
| 683,686,359
|
MDU6SXNzdWU2ODM2ODYzNTk=
| 524
|
Some docs are missing parameter names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Indeed, good catch!"
] | 2020-08-21T16:47:34
| 2020-08-25T09:04:03
| 2020-08-25T09:04:03
|
CONTRIBUTOR
| null | null | null | null |
See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings; maybe it's an old docstring format that doesn't work with the current Sphinx version.
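For illustration only, a small sketch of the docstring style that Sphinx (with the napoleon extension) usually renders with parameter names; the signature, defaults, and wording below are placeholders, not the actual `nlp.Dataset.map` docstring:

```python
def map(self, function, batched=False, batch_size=1000):
    """Apply `function` to all the examples in the table.

    Args:
        function (callable): Function applied to each example (or batch of examples).
        batched (bool, defaults to False): Whether `function` receives a batch instead of a single example.
        batch_size (int, defaults to 1000): Number of examples per batch when `batched=True`.

    Returns:
        Dataset: A new dataset with the transformed examples.
    """
```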
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/524/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/524/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 16:16:29
|
https://api.github.com/repos/huggingface/datasets/issues/522
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/522/comments
|
https://api.github.com/repos/huggingface/datasets/issues/522/events
|
https://github.com/huggingface/datasets/issues/522
| 682,478,833
|
MDU6SXNzdWU2ODI0Nzg4MzM=
| 522
|
dictionnary typo in docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4",
"events_url": "https://api.github.com/users/yonigottesman/events{/privacy}",
"followers_url": "https://api.github.com/users/yonigottesman/followers",
"following_url": "https://api.github.com/users/yonigottesman/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yonigottesman",
"id": 4004127,
"login": "yonigottesman",
"node_id": "MDQ6VXNlcjQwMDQxMjc=",
"organizations_url": "https://api.github.com/users/yonigottesman/orgs",
"received_events_url": "https://api.github.com/users/yonigottesman/received_events",
"repos_url": "https://api.github.com/users/yonigottesman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yonigottesman",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks!"
] | 2020-08-20T07:11:05
| 2020-08-20T07:52:14
| 2020-08-20T07:52:13
|
CONTRIBUTOR
| null | null | null | null |
In many places "dictionary" is spelled "dictionnary"; not sure if it's on purpose or not.
Fixed in this PR:
https://github.com/huggingface/nlp/pull/521
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/522/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/522/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:41:08
|
https://api.github.com/repos/huggingface/datasets/issues/519
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/519/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/519/comments
|
https://api.github.com/repos/huggingface/datasets/issues/519/events
|
https://github.com/huggingface/datasets/issues/519
| 682,193,882
|
MDU6SXNzdWU2ODIxOTM4ODI=
| 519
|
[BUG] Metrics throwing new error on master since 0.4.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbragg",
"id": 2238344,
"login": "jbragg",
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"repos_url": "https://api.github.com/users/jbragg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbragg",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric",
"Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105 "
] | 2020-08-19T21:29:15
| 2022-06-02T16:41:01
| 2020-08-19T22:04:40
|
CONTRIBUTOR
| null | null | null | null |
The following error occurs when passing references of type `List[List[str]]` to metrics like `bleu`.
This wasn't happening on 0.4.0, but it is happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 242, in add_batch
batch = self.info.features.encode_batch(batch)
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in encode_batch
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in <listcomp>
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 456, in encode_nested_example
raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
```
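As the follow-up comments note, the error comes from passing raw strings: the `bleu` metric wants token lists. A minimal sketch of the expected shapes; the whitespace splitting here is only for illustration, use a real tokenizer in practice:

```python
import nlp

bleu = nlp.load_metric("bleu")

predictions = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]

# predictions: list of token lists; references: list of lists of token lists
score = bleu.compute(
    predictions=[p.split() for p in predictions],
    references=[[r.split() for r in refs] for refs in references],
)
print(score["bleu"])
```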
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbragg",
"id": 2238344,
"login": "jbragg",
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"repos_url": "https://api.github.com/users/jbragg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbragg",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/519/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/519/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:35:25
|
https://api.github.com/repos/huggingface/datasets/issues/517
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/517/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/517/comments
|
https://api.github.com/repos/huggingface/datasets/issues/517/events
|
https://github.com/huggingface/datasets/issues/517
| 681,896,944
|
MDU6SXNzdWU2ODE4OTY5NDQ=
| 517
|
add MLDoc dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
| null |
[] |
[
"Any updates on this?",
"This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies."
] | 2020-08-19T14:41:59
| 2021-08-03T05:59:33
| null |
CONTRIBUTOR
| null | null | null | null |
Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social), and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish.
| null |
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/517/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/517/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| null |
https://api.github.com/repos/huggingface/datasets/issues/514
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/514/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/514/comments
|
https://api.github.com/repos/huggingface/datasets/issues/514/events
|
https://github.com/huggingface/datasets/issues/514
| 681,256,348
|
MDU6SXNzdWU2ODEyNTYzNDg=
| 514
|
dataset.shuffle(keep_in_memory=True) is never allowed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vegarab",
"id": 24683907,
"login": "vegarab",
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"repos_url": "https://api.github.com/users/vegarab/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vegarab",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
| null |
[] |
[
"This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ",
"Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_memory` to `True`, the assert should pass, no?",
"I failed to realise that this only applies to `shuffle()`. Whenever `keep_in_memory` is set to True, this is passed on to the `select()` function. However, if `cache_file_name` is None, it will be defined in the `shuffle()` function before it is passed on to `select()`. \r\n\r\nThus, `select()` is called with `keep_in_memory=True` and a not None value for `cache_file_name`. \r\nThis is essentially fixed in #513 \r\n\r\nEasily reproducible:\r\n```python\r\n>>> import nlp\r\n>>> data = nlp.load_dataset(\"cosmos_qa\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> data.shuffle(keep_in_memory=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 1398, in shuffle\r\n verbose=verbose,\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 1178, in select\r\n ), \"Please use either `keep_in_memory` or `cache_file_name` but not both.\"\r\nAssertionError: Please use either `keep_in_memory` or `cache_file_name` but not both.\r\n>>>data.select([0], keep_in_memory=True)\r\n# No error\r\n```",
"Oh yes ok got it thanks. Should be fixed if we are happy with #513 indeed.",
"My bad. This is actually not fixed in #513. Sorry about that...\r\nThe new `indices_cache_file_name` is set to a non-None value in the new `shuffle()` as well. \r\n\r\nThe buffer and caching mechanisms used in the `select()` function are too intricate for me to understand why the check is there at all. I've removed it in my local build and it seems to be working fine for my project, without really considering other implications of the change. \r\n\r\n",
"Ok I'll investigate and add a series of tests on the `keep_in_memory=True` settings which is under-tested atm",
"Hey, still seeing this issue with the latest version.",
"The same :(",
"These are the steps needed to fix this issue:\r\n1. add the following check to `Dataset.shuffle`:\r\n```python\r\nif keep_in_memory and indices_cache_file_name is not None:\r\n raise ValueError(\"Please use either `keep_in_memory` or `indices_cache_file_name` but not both.\")\r\n```\r\n2. set `indices_cache_file_name` to `None` if `keep_in_memory` is True in the call to `select`\r\n3. add a test with `shuffle(keep_in_memory=True)`",
"Hi @mariosasko , I have opened this PR #5082 "
] | 2020-08-18T18:47:40
| 2022-10-10T12:21:58
| 2022-10-10T12:21:58
|
CONTRIBUTOR
| null | null | null | null |
As of commit ef4aac2, using the parameter `keep_in_memory=True` is never possible, e.g. `dataset.select(keep_in_memory=True)` always fails.
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
    not keep_in_memory or cache_file_name is None
), "Please use either `keep_in_memory` or `cache_file_name` but not both."
```
This affects `shuffle()` (since `select()` is a sub-routine of it) as well as `map()`, which has the same check.
I'd love to fix this myself, but I'm unsure what the intention of the assert is, given the rest of the logic in the function concerning `cache_file_name` and `keep_in_memory`.
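For illustration, a sketch of the kind of mutually exclusive check the thread converges on in the comments above (the `indices_cache_file_name` argument follows the later rename; this is not the actual patch):

```python
def shuffle(self, keep_in_memory=False, indices_cache_file_name=None, **kwargs):
    # Fail loudly on a genuinely contradictory call...
    if keep_in_memory and indices_cache_file_name is not None:
        raise ValueError(
            "Please use either `keep_in_memory` or `indices_cache_file_name` but not both."
        )
    # ...and only generate a default cache file name when NOT keeping the result
    # in memory, so the downstream `select()` never sees both options set at once.
    ...
```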
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/514/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/514/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 782 days, 17:34:18
|
https://api.github.com/repos/huggingface/datasets/issues/511
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/511/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/511/comments
|
https://api.github.com/repos/huggingface/datasets/issues/511/events
|
https://github.com/huggingface/datasets/issues/511
| 681,055,553
|
MDU6SXNzdWU2ODEwNTU1NTM=
| 511
|
dataset.shuffle() and select() resets format. Intended?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vegarab",
"id": 24683907,
"login": "vegarab",
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"repos_url": "https://api.github.com/users/vegarab/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vegarab",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table and infos).\r\n\r\nThinking about it I don't see a strong reason against transmitting the format from the parent dataset to its newly created child. It's probably what's expected by the user in most cases. What do you think @lhoestq?\r\n\r\nBy the way, I've been working today on a refactoring of all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). The idea is to speed them up by a lot (like, really a lot) by working as much as possible with an indices mapping table instead of doing a deep copy of the full dataset as we've been doing currently. You can give it a look and try it here: https://github.com/huggingface/nlp/pull/513\r\nFeedbacks are very much welcome",
"I think it's ok to keep the format.\r\nIf we want to have this behavior for `.map` too we just have to make sure it doesn't keep a column that's been removed.",
"Shall we have this in the coming release by the way @lhoestq ?",
"Yes sure !",
"Since datasets 1.0.0 the format is not reset anymore.\r\nClosing this one, but feel free to re-open if you have other questions"
] | 2020-08-18T13:46:01
| 2020-09-14T08:45:38
| 2020-09-14T08:45:38
|
CONTRIBUTOR
| null | null | null | null |
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing, I find it convenient to save the processed dataset to a file using `torch.save(dataset, "dataset.pt")` and later load it with `torch.load("dataset.pt")`, which preserves the format defined before saving.
I do the shuffling and selecting (for controlling the dataset size) after loading the data from the .pt file, as that is convenient when you train multiple models on varying sizes of the same dataset.
The obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`.
_I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_
#### How to reproduce:
```python
import nlp
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
def create_features(batch):
    context_encoding = tokenizer.batch_encode_plus(batch["context"])
    return {"input_ids": context_encoding["input_ids"]}
dataset = nlp.load_dataset("cosmos_qa", split="train")
dataset = dataset.map(create_features, batched=True)
dataset.set_format(type="torch", columns=["input_ids"])
dataset[0]
# {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])}
dataset = dataset.shuffle()
dataset[0]
# {'id': '3Q9(...)20', 'context': "Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... , 0, 0]}
```
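A minimal sketch of the workaround mentioned above, simply re-applying the format to the object returned by `shuffle()` or `select()`:

```python
dataset = dataset.shuffle()
dataset.set_format(type="torch", columns=["input_ids"])  # re-apply the format after shuffling
dataset[0]
# {'input_ids': tensor([...])}
```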
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/511/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/511/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 26 days, 18:59:37
|
https://api.github.com/repos/huggingface/datasets/issues/510
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/510/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/510/comments
|
https://api.github.com/repos/huggingface/datasets/issues/510/events
|
https://github.com/huggingface/datasets/issues/510
| 680,823,644
|
MDU6SXNzdWU2ODA4MjM2NDQ=
| 510
|
Version of numpy to use the library
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6966175?v=4",
"events_url": "https://api.github.com/users/isspek/events{/privacy}",
"followers_url": "https://api.github.com/users/isspek/followers",
"following_url": "https://api.github.com/users/isspek/following{/other_user}",
"gists_url": "https://api.github.com/users/isspek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/isspek",
"id": 6966175,
"login": "isspek",
"node_id": "MDQ6VXNlcjY5NjYxNzU=",
"organizations_url": "https://api.github.com/users/isspek/orgs",
"received_events_url": "https://api.github.com/users/isspek/received_events",
"repos_url": "https://api.github.com/users/isspek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/isspek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isspek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/isspek",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Seems like this method was added in 1.17. I'll add a requirement on this.",
"Thank you so much. After upgrading the numpy library, it worked."
] | 2020-08-18T08:59:13
| 2020-08-19T18:35:56
| 2020-08-19T18:35:56
|
NONE
| null | null | null | null |
Thank you so much for your excellent work! I would like to use the nlp library in my project. While importing nlp, I am receiving the following error: `AttributeError: module 'numpy.random' has no attribute 'Generator'`. The numpy version in my project is 1.16.0. May I ask which numpy version is required for the nlp library?
Thanks in advance.
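As the replies point out, `numpy.random.Generator` only exists from NumPy 1.17 onwards, so a quick environment check looks like this (a small sketch, nothing nlp-specific):

```python
import numpy as np

print(np.__version__)
# False on 1.16.0 -> upgrade, e.g. `pip install -U "numpy>=1.17"`
print(hasattr(np.random, "Generator"))
```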
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6966175?v=4",
"events_url": "https://api.github.com/users/isspek/events{/privacy}",
"followers_url": "https://api.github.com/users/isspek/followers",
"following_url": "https://api.github.com/users/isspek/following{/other_user}",
"gists_url": "https://api.github.com/users/isspek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/isspek",
"id": 6966175,
"login": "isspek",
"node_id": "MDQ6VXNlcjY5NjYxNzU=",
"organizations_url": "https://api.github.com/users/isspek/orgs",
"received_events_url": "https://api.github.com/users/isspek/received_events",
"repos_url": "https://api.github.com/users/isspek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/isspek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isspek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/isspek",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/510/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/510/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 1 day, 9:36:43
|
https://api.github.com/repos/huggingface/datasets/issues/509
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/509/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/509/comments
|
https://api.github.com/repos/huggingface/datasets/issues/509/events
|
https://github.com/huggingface/datasets/issues/509
| 679,711,585
|
MDU6SXNzdWU2Nzk3MTE1ODU=
| 509
|
Converting TensorFlow dataset example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4",
"events_url": "https://api.github.com/users/saareliad/events{/privacy}",
"followers_url": "https://api.github.com/users/saareliad/followers",
"following_url": "https://api.github.com/users/saareliad/following{/other_user}",
"gists_url": "https://api.github.com/users/saareliad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/saareliad",
"id": 22762845,
"login": "saareliad",
"node_id": "MDQ6VXNlcjIyNzYyODQ1",
"organizations_url": "https://api.github.com/users/saareliad/orgs",
"received_events_url": "https://api.github.com/users/saareliad/received_events",
"repos_url": "https://api.github.com/users/saareliad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/saareliad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saareliad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/saareliad",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it work in reverse, feel free to open a PR to share it with the community :)",
"In our docs: [Using a Dataset with PyTorch/Tensorflow](https://huggingface.co/docs/datasets/torch_tensorflow.html)."
] | 2020-08-16T08:05:20
| 2021-08-03T06:01:18
| 2021-08-03T06:01:17
|
NONE
| null | null | null | null |
Hi,
I want to use TensorFlow datasets with this repo. I noticed you made a conversion script;
can you give a simple example of using it?
Thanks
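In the meantime, one hedged sketch that side-steps the conversion script entirely is to materialize the TFDS examples with `tfds.as_numpy` and rebuild them with `Dataset.from_dict`; the dataset name, the columns, and the availability of `from_dict` in the installed version are assumptions here:

```python
import tensorflow_datasets as tfds
import nlp

tf_ds = tfds.load("imdb_reviews", split="train")

# Materialize the tf.data pipeline into plain Python lists
examples = {"text": [], "label": []}
for ex in tfds.as_numpy(tf_ds):
    examples["text"].append(ex["text"].decode("utf-8"))
    examples["label"].append(int(ex["label"]))

# Rebuild as an nlp Dataset (assumes `Dataset.from_dict` is available in your version)
hf_ds = nlp.Dataset.from_dict(examples)
print(hf_ds)
```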
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/509/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/509/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 351 days, 21:55:57
|
https://api.github.com/repos/huggingface/datasets/issues/508
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/508/comments
|
https://api.github.com/repos/huggingface/datasets/issues/508/events
|
https://github.com/huggingface/datasets/issues/508
| 679,705,734
|
MDU6SXNzdWU2Nzk3MDU3MzQ=
| 508
|
TypeError: Receiver() takes no arguments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1225851?v=4",
"events_url": "https://api.github.com/users/sebastiantomac/events{/privacy}",
"followers_url": "https://api.github.com/users/sebastiantomac/followers",
"following_url": "https://api.github.com/users/sebastiantomac/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastiantomac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sebastiantomac",
"id": 1225851,
"login": "sebastiantomac",
"node_id": "MDQ6VXNlcjEyMjU4NTE=",
"organizations_url": "https://api.github.com/users/sebastiantomac/orgs",
"received_events_url": "https://api.github.com/users/sebastiantomac/received_events",
"repos_url": "https://api.github.com/users/sebastiantomac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sebastiantomac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastiantomac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sebastiantomac",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Which version of Apache Beam do you have (can you copy your full environment info here)?",
"apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ",
"Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a dummy pipeline with [this code](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_minimal.py)\r\n\r\nIf you get the same error, it means that the issue comes from apache beam.\r\nOtherwise we'll investigate what went wrong here",
"Still, same error, so I guess it is on apache beam then. \r\nThanks for the investigation.",
"Thanks for trying\r\nLet us know if you find clues of what caused this issue, or if you find a fix"
] | 2020-08-16T07:18:16
| 2020-09-01T14:53:33
| 2020-09-01T14:49:03
|
NONE
| null | null | null | null |
I am trying to load a Wikipedia dataset:
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
This fails in the Apache Beam runner.
```
Traceback (most recent call last):
File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module>
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner')
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare
pipeline_results = pipeline.run()
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run
return self.runner.run_pipeline(self, self._options)
....
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded
self.output(decoded_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output
cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast
return type(*args)
TypeError: Receiver() takes no arguments
```
This is run on a Windows 10 machine with Python 3.8. I get the same error loading the Swedish Wikipedia dump.
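As suggested in the replies, a way to narrow this down is to run a dummy pipeline on the DirectRunner with no nlp involvement at all; if the sketch below raises the same `TypeError: Receiver() takes no arguments`, the problem sits in apache-beam on Windows rather than in this library:

```python
import apache_beam as beam

with beam.Pipeline(runner="DirectRunner") as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["a", "b", "c"])
        | "Upper" >> beam.Map(str.upper)
        | "Print" >> beam.Map(print)
    )
```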
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1225851?v=4",
"events_url": "https://api.github.com/users/sebastiantomac/events{/privacy}",
"followers_url": "https://api.github.com/users/sebastiantomac/followers",
"following_url": "https://api.github.com/users/sebastiantomac/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastiantomac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sebastiantomac",
"id": 1225851,
"login": "sebastiantomac",
"node_id": "MDQ6VXNlcjEyMjU4NTE=",
"organizations_url": "https://api.github.com/users/sebastiantomac/orgs",
"received_events_url": "https://api.github.com/users/sebastiantomac/received_events",
"repos_url": "https://api.github.com/users/sebastiantomac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sebastiantomac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastiantomac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sebastiantomac",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/508/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/508/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 16 days, 7:30:47
|
https://api.github.com/repos/huggingface/datasets/issues/507
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/507/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/507/comments
|
https://api.github.com/repos/huggingface/datasets/issues/507/events
|
https://github.com/huggingface/datasets/issues/507
| 679,400,683
|
MDU6SXNzdWU2Nzk0MDA2ODM=
| 507
|
Errors when I use
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mchari",
"id": 30506151,
"login": "mchari",
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"repos_url": "https://api.github.com/users/mchari/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mchari",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers."
] | 2020-08-14T21:03:57
| 2020-08-14T21:39:10
| 2020-08-14T21:39:10
|
NONE
| null | null | null | null |
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors.
I am using **transformers 3.0.2**.
```python
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoTokenizer

model_name = "deepset/roberta-base-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
```
The errors are:
```
res = nlp(QA_input)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__
    for s, e, score in zip(starts, ends, scores)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
    for s, e, score in zip(starts, ends, scores)
KeyError: 0
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mchari",
"id": 30506151,
"login": "mchari",
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"repos_url": "https://api.github.com/users/mchari/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mchari",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/507/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/507/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:35:13
|
https://api.github.com/repos/huggingface/datasets/issues/501
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/501/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/501/comments
|
https://api.github.com/repos/huggingface/datasets/issues/501/events
|
https://github.com/huggingface/datasets/issues/501
| 677,952,893
|
MDU6SXNzdWU2Nzc5NTI4OTM=
| 501
|
Caching doesn't work for map (non-deterministic)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8149933?v=4",
"events_url": "https://api.github.com/users/wulu473/events{/privacy}",
"followers_url": "https://api.github.com/users/wulu473/followers",
"following_url": "https://api.github.com/users/wulu473/following{/other_user}",
"gists_url": "https://api.github.com/users/wulu473/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wulu473",
"id": 8149933,
"login": "wulu473",
"node_id": "MDQ6VXNlcjgxNDk5MzM=",
"organizations_url": "https://api.github.com/users/wulu473/orgs",
"received_events_url": "https://api.github.com/users/wulu473/received_events",
"repos_url": "https://api.github.com/users/wulu473/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wulu473/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wulu473/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wulu473",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"Thanks for reporting !\r\n\r\nTo store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.\r\nThe hash doesn't seem to stay the same over sessions for the tokenizer.\r\nApparently this is because of the regex at `tokenizer.pat` is not well supported by our hashing function.\r\n\r\nI'm working on a fix",
"Thanks everyone. Works great now.",
"Hi. I believe the fix was for the nlp library. Is there a solution to handle compiled regex expressions in .map() with the caching. I want to run a simple regex pattern on a big dataset, but I am running into the issue of compiled expression not being cached. \r\n\r\nInstead of opening a new issue, I thought I would put my query here. Let me know if a new issue would be more suitable. Thanks",
"Hi @MaveriQ! This fix is also included in the `datasets` library. Can you provide a reproducer?"
] | 2020-08-12T20:20:07
| 2022-08-08T11:02:23
| 2020-08-24T16:34:35
|
NONE
| null | null | null | null |
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it.
```python
import nlp
import transformers
def main():
    ds = nlp.load_dataset("reddit", split="train[:500]")
    tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")

    def convert_to_features(example_batch):
        input_str = example_batch["body"]
        encodings = tokenizer(input_str, add_special_tokens=True, truncation=True)
        return encodings

    ds = ds.map(convert_to_features, batched=True)


if __name__ == "__main__":
    main()
```
Roughly 3/10 times, this example recomputes the tokenization.
Is this expected behaviour?
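In the meantime, one possible workaround sketch (assuming `nlp` 0.4.x; the cache path below is purely illustrative) is to pass an explicit `cache_file_name` to `.map`, so the cached Arrow file is reused regardless of how the mapped function gets hashed:
```python
import nlp
import transformers

ds = nlp.load_dataset("reddit", split="train[:500]")
tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")


def convert_to_features(example_batch):
    return tokenizer(example_batch["body"], add_special_tokens=True, truncation=True)


# Pinning the cache file sidesteps the automatic function hashing entirely:
# the same Arrow file is reused on every run, deterministic hash or not.
ds = ds.map(
    convert_to_features,
    batched=True,
    cache_file_name="/tmp/reddit_tokenized.arrow",  # illustrative path
)
```
This doesn't fix the unstable hashing itself; it only pins where the result is cached.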
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8149933?v=4",
"events_url": "https://api.github.com/users/wulu473/events{/privacy}",
"followers_url": "https://api.github.com/users/wulu473/followers",
"following_url": "https://api.github.com/users/wulu473/following{/other_user}",
"gists_url": "https://api.github.com/users/wulu473/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wulu473",
"id": 8149933,
"login": "wulu473",
"node_id": "MDQ6VXNlcjgxNDk5MzM=",
"organizations_url": "https://api.github.com/users/wulu473/orgs",
"received_events_url": "https://api.github.com/users/wulu473/received_events",
"repos_url": "https://api.github.com/users/wulu473/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wulu473/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wulu473/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wulu473",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/501/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/501/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 11 days, 20:14:28
|
https://api.github.com/repos/huggingface/datasets/issues/492
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/492/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/492/comments
|
https://api.github.com/repos/huggingface/datasets/issues/492/events
|
https://github.com/huggingface/datasets/issues/492
| 676,495,064
|
MDU6SXNzdWU2NzY0OTUwNjQ=
| 492
|
nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.",
"Or maybe the assertion comes from elsewhere ?",
"I'm using the master branch. The assertion failure comes from the underlying `pa.concat_tables()`, which is in the pyarrow package. That method does check schemas.\r\n\r\nSince `features.type` does not contain information about nullable vs non-nullable features, the `cast_()` method won't resolve the schema mismatch. There is information in a schema which is not stored in features.",
"I'm doing a refactor of type inference in #363 . Both text fields should match after that",
"By default nullable will be set to True",
"It should be good now. I was able to run\r\n\r\n```python\r\n>>> from nlp import concatenate_datasets, load_dataset\r\n>>>\r\n>>> bookcorpus = load_dataset(\"bookcorpus\", split=\"train\")\r\n>>> wiki = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\r\n>>> wiki.remove_columns_(\"title\") # only keep the text\r\n>>>\r\n>>> assert bookcorpus.features.type == wiki.features.type\r\n>>> bert_dataset = concatenate_datasets([bookcorpus, wiki])\r\n```",
"Thanks!"
] | 2020-08-11T00:27:46
| 2020-08-26T16:17:19
| 2020-08-26T16:17:19
|
CONTRIBUTOR
| null | null | null | null |
Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dset = nlp.concatenate_datasets([dset_wikipedia, dset_books])
```
This fails because they have different schemas, despite having identical features.
```python
assert dset_wikipedia.features == dset_books.features # True
assert dset_wikipedia._data.schema == dset_books._data.schema # False
```
The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. For now I hack together a matching schema with the following line, but it would be better if this were handled in `Features` itself.
```python
dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema)
```
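For reference, the mismatch is purely about nullability, which PyArrow treats as part of schema equality. A minimal standalone sketch (plain `pyarrow`, no `nlp` involved) showing the distinction:
```python
import pyarrow as pa

# Same logical column type, but different nullability flags.
nullable = pa.schema([pa.field("text", pa.string(), nullable=True)])
non_nullable = pa.schema([pa.field("text", pa.string(), nullable=False)])

# Schema equality takes nullability into account, which is why the
# concatenation's schema check fails even though the features look identical.
print(nullable.equals(non_nullable))  # False
```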
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/492/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/492/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 15 days, 15:49:33
|
https://api.github.com/repos/huggingface/datasets/issues/491
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/491/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/491/comments
|
https://api.github.com/repos/huggingface/datasets/issues/491/events
|
https://github.com/huggingface/datasets/issues/491
| 676,486,275
|
MDU6SXNzdWU2NzY0ODYyNzU=
| 491
|
No 0.4.0 release on GitHub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I did the release on github, and updated the doc :)\r\nSorry for the delay",
"Thanks!"
] | 2020-08-10T23:59:57
| 2020-08-11T16:50:07
| 2020-08-11T16:50:07
|
CONTRIBUTOR
| null | null | null | null |
0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) still shows 0.3.0, and there's no tag to easily clone the 0.4.0 version of the repo.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/491/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/491/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 16:50:10
|
https://api.github.com/repos/huggingface/datasets/issues/490
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/490/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/490/comments
|
https://api.github.com/repos/huggingface/datasets/issues/490/events
|
https://github.com/huggingface/datasets/issues/490
| 676,482,242
|
MDU6SXNzdWU2NzY0ODIyNDI=
| 490
|
Loading preprocessed Wikipedia dataset requires apache_beam
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-08-10T23:46:50
| 2020-08-14T13:17:20
| 2020-08-14T13:17:20
|
CONTRIBUTOR
| null | null | null | null |
Running
`nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")`
gives an error if apache_beam is not installed, stemming from
https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988
This succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. Could it be removed?
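One rough sketch of what such a change could look like (a hypothetical helper, not the library's actual code) is to defer the `apache_beam` import and only require it when a Beam pipeline really has to run:
```python
def beam_available() -> bool:
    """Lazily check whether apache_beam can be imported."""
    try:
        import apache_beam  # noqa: F401
        return True
    except ImportError:
        return False


# A builder could call this only when it actually needs to process raw data,
# and skip the requirement when serving an already-preprocessed copy.
if not beam_available():
    print("apache_beam not installed; falling back to the preprocessed dataset")
```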
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/490/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/490/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 3 days, 13:30:30
|
https://api.github.com/repos/huggingface/datasets/issues/489
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/489/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/489/comments
|
https://api.github.com/repos/huggingface/datasets/issues/489/events
|
https://github.com/huggingface/datasets/issues/489
| 676,456,257
|
MDU6SXNzdWU2NzY0NTYyNTc=
| 489
|
ug
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"whoops",
"please delete this"
] | 2020-08-10T22:33:03
| 2020-08-10T22:55:14
| 2020-08-10T22:33:40
|
NONE
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/489/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/489/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 0:00:37
|
|
https://api.github.com/repos/huggingface/datasets/issues/488
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/488/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/488/comments
|
https://api.github.com/repos/huggingface/datasets/issues/488/events
|
https://github.com/huggingface/datasets/issues/488
| 676,299,993
|
MDU6SXNzdWU2NzYyOTk5OTM=
| 488
|
issues with downloading datasets for wmt16 and wmt19
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.02\r\ncat UNv1.0.en-ru.tar.gz.0* > UNv1.0.en-ru.tar.gz\r\n```\r\nit has other languages as well, in case https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/ is gone",
"Further, `nlp.load_dataset('wmt19', 'ru-en')` has only the `train` and `val` datasets. `test` is missing.\r\n\r\nFixed locally for summarization needs, by running:\r\n```\r\npip install sacrebleu\r\nsacrebleu -t wmt19 -l ru-en --echo src > test.source\r\nsacrebleu -t wmt19 -l ru-en --echo ref > test.target\r\n```\r\nh/t @sshleifer ",
"Fixed in https://github.com/huggingface/datasets/pull/1912"
] | 2020-08-10T17:32:51
| 2022-10-04T17:46:59
| 2022-10-04T17:46:58
|
CONTRIBUTOR
| null | null | null | null |
I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to run `pip install -e ".[dev]"` on master at first because the currently released `nlp` didn't work (sorry, I didn't save the error). I then went back to the released version and it worked, so it must have been some outdated dependencies that `pip install -e ".[dev]"` fixed.
2. It was downloading at about 60 KB/s, so almost 5 hours to get the dataset, and it was downloading all language pairs rather than just the one I asked for.
I tried the same code with `wmt19` in parallel: it took a few seconds to download and only fetched data for the requested pair (but it failed too, see below).
3. My machine crashed, and when I retried I got:
```
Traceback (most recent call last):
File "./download.py", line 9, in <module>
dataset = nlp.load_dataset('wmt16', 'ru-en')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete'
```
It can't handle resumes, but it doesn't allow a fresh start either; I had to delete the `.incomplete` directory manually (see the cleanup sketch at the end of this report).
4. And finally, once the dataset had downloaded, it failed to fetch the metrics:
```
Traceback (most recent call last):
File "./download.py", line 15, in <module>
metric = nlp.load_metric('wmt16')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric
module_path, hash = prepare_module(path, download_config=download_config, dataset=False)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py
```
5. If I run the same code with `wmt19`, it fails too:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
```
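For point 3, a small cleanup sketch (the cache path is taken from the traceback above and may differ on other machines) that removes stale `.incomplete` leftovers before retrying:
```python
import shutil
from pathlib import Path

# Default cache layout: ~/.cache/huggingface/datasets/<dataset>/<config>/<version>/
version_dir = Path.home() / ".cache/huggingface/datasets/wmt16/ru-en/1.0.0"

for leftover in version_dir.glob("*.incomplete"):
    print(f"removing stale partial download: {leftover}")
    if leftover.is_dir():
        shutil.rmtree(leftover)
    else:
        leftover.unlink()
```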
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/488/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/488/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 785 days, 0:14:07
|
https://api.github.com/repos/huggingface/datasets/issues/486
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/486/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/486/comments
|
https://api.github.com/repos/huggingface/datasets/issues/486/events
|
https://github.com/huggingface/datasets/issues/486
| 675,649,034
|
MDU6SXNzdWU2NzU2NDkwMzQ=
| 486
|
Bookcorpus data contains pretokenized text
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4",
"events_url": "https://api.github.com/users/orsharir/events{/privacy}",
"followers_url": "https://api.github.com/users/orsharir/followers",
"following_url": "https://api.github.com/users/orsharir/following{/other_user}",
"gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/orsharir",
"id": 99543,
"login": "orsharir",
"node_id": "MDQ6VXNlcjk5NTQz",
"organizations_url": "https://api.github.com/users/orsharir/orgs",
"received_events_url": "https://api.github.com/users/orsharir/received_events",
"repos_url": "https://api.github.com/users/orsharir/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orsharir/subscriptions",
"type": "User",
"url": "https://api.github.com/users/orsharir",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do it. Could you provide more details ?",
"I'm afraid that I don't know how to obtain the original BookCorpus data. I believe this version came from an anonymous Google Drive link posted in another issue.\r\n\r\nGoing through the raw text in this version, it's apparent that NLTK's TreebankWordTokenizer was applied on it (I gave some examples in my original post), followed by:\r\n`' '.join(tokens)`\r\nYou can retrieve the tokenization by splitting on whitespace. You can then \"detokenize\" it with TreebankWordDetokenizer class of NLTK (though, as I suggested, use the fixed version in my repo). This will bring the text closer to its original form, but some steps of TreebankWordTokenizer are destructive, so it wouldn't be one-to-one. Something along the lines of the following should work:\r\n```\r\ntreebank_detokenizer = nltk.tokenize.treebank.TreebankWordDetokenizer()\r\ndb = nlp.load_dataset('bookcorpus', split=nlp.Split.TRAIN)\r\ndb = db.map(lambda x: treebank_detokenizer.detokenize(x['text'].split()))\r\n```\r\n\r\nRegarding other issues beyond the above, I'm afraid that I can't help with that.",
"Ok I get it, that would be very cool indeed\r\n\r\nWhat kinds of patterns the detokenizer can't retrieve ?",
"The TreebankTokenizer makes some assumptions about whitespace, parentheses, quotation marks, etc. For instance, while tokenizing the following text:\r\n```\r\nDwayne \"The Rock\" Johnson\r\n```\r\nwill result in:\r\n```\r\nDwayne `` The Rock '' Johnson\r\n```\r\nwhere the left and right quotation marks are turned into distinct symbols. Upon reconstruction, we can attach the left part to its token on the right, and respectively for the right part. However, the following texts would be tokenized exactly the same:\r\n```\r\nDwayne \" The Rock \" Johnson\r\nDwayne \" The Rock\" Johnson\r\nDwayne \" The Rock\" Johnson\r\n...\r\n```\r\nIn the above examples, the detokenizer would correct these inputs into the canonical text\r\n```\r\nDwayne \"The Rock\" Johnson\r\n```\r\nHowever, there are cases where there the solution cannot easily be inferred (at least without a true LM - this tokenizer is just a bunch of regexes). For instance, in cases where you have a fragment that contains the end of quote, but not its beginning, plus an accidental space:\r\n```\r\n... and it sounds fantastic, \" he said.\r\n```\r\nIn the above case, the tokenizer would assume that the quotes refer to the next token, and so upon detokenization it will result in the following mistake:\r\n```\r\n... and it sounds fantastic, \"he said.\r\n```\r\n\r\nWhile these are all odd edge cases (the basic assumptions do make sense), in noisy data they can occur, which is why I mentioned that the detokenizer cannot restore the original perfectly.\r\n",
"To confirm, since this is preprocessed, this was not the exact version of the Book Corpus used to actually train the models described here (particularly Distilbert)? https://huggingface.co/datasets/bookcorpus\r\n\r\nOr does this preprocessing exactly match that of the papers?",
"I believe these are just artifacts of this particular source. It might be better to crawl it again, or use another preprocessed source, as found here: https://github.com/soskek/bookcorpus ",
"Yes actually the BookCorpus on hugginface is based on [this](https://github.com/soskek/bookcorpus/issues/24#issuecomment-643933352). And I kind of regret naming it as \"BookCorpus\" instead of something like \"BookCorpusLike\".\r\n\r\nBut there is a good news ! @shawwn has replicated BookCorpus in his way, and also provided a link to download the plain text files. see [here](https://github.com/soskek/bookcorpus/issues/27). There is chance we can have a \"OpenBookCorpus\" !",
"Resolved via #856"
] | 2020-08-09T06:53:24
| 2022-10-04T17:44:33
| 2022-10-04T17:44:33
|
CONTRIBUTOR
| null | null | null | null |
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively.
In my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly on your remote cached copy of the dataset. If you choose to do so, I would also suggest using my fork of NLTK, which fixes several bugs in their detokenizer (I've opened a pull request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575
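A minimal sketch of that detokenization step with stock NLTK (the fork linked above should behave the same apart from the bug fixes):
```python
from nltk.tokenize.treebank import TreebankWordDetokenizer

detokenizer = TreebankWordDetokenizer()

# A pretokenized bookcorpus-style line: split contractions and `` '' quotes.
pretokenized = "she did n't say `` hello '' to him ."
restored = detokenizer.detokenize(pretokenized.split())
print(restored)  # expected roughly: she didn't say "hello" to him.
```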
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/486/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/486/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 786 days, 10:51:09
|
https://api.github.com/repos/huggingface/datasets/issues/485
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/485/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/485/comments
|
https://api.github.com/repos/huggingface/datasets/issues/485/events
|
https://github.com/huggingface/datasets/issues/485
| 675,595,393
|
MDU6SXNzdWU2NzU1OTUzOTM=
| 485
|
PAWS dataset first item is header
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[] | 2020-08-08T22:05:25
| 2020-08-19T09:50:01
| 2020-08-19T09:50:01
|
CONTRIBUTOR
| null | null | null | null |
```
import nlp
dataset = nlp.load_dataset('xtreme', 'PAWS-X.en')
dataset['test'][0]
```
prints the following
```
{'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'}
```
`dataset['test'][0]` should probably be the first real example in the dataset, not just a dictionary mapping the column names to themselves. The loader probably just needs to skip the first (header) row by default, or something like that.
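Until the loader skips that row, one possible workaround sketch (assuming a version of `nlp` that has `Dataset.filter`) is to drop rows whose fields simply repeat the column names:
```python
import nlp

dataset = nlp.load_dataset("xtreme", "PAWS-X.en", split="test")

# Drop the stray header row, i.e. the example whose fields equal the column names.
dataset = dataset.filter(lambda example: example["sentence1"] != "sentence1")

print(dataset[0])  # should now be a real sentence pair
```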
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/485/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/485/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 10 days, 11:44:36
|
https://api.github.com/repos/huggingface/datasets/issues/483
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/483/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/483/comments
|
https://api.github.com/repos/huggingface/datasets/issues/483/events
|
https://github.com/huggingface/datasets/issues/483
| 675,080,694
|
MDU6SXNzdWU2NzUwODA2OTQ=
| 483
|
rotten tomatoes movie review dataset taken down
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] |
[
"found a mirror: https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz",
"fixed in #484 ",
"Closing this one. Thanks again @jxmorris12 for taking care of this :)"
] | 2020-08-07T15:12:01
| 2020-09-08T09:36:34
| 2020-09-08T09:36:33
|
CONTRIBUTOR
| null | null | null | null |
In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed along with the dataset itself (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/483/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/483/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 31 days, 18:24:32
|
https://api.github.com/repos/huggingface/datasets/issues/482
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/482/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/482/comments
|
https://api.github.com/repos/huggingface/datasets/issues/482/events
|
https://github.com/huggingface/datasets/issues/482
| 674,851,147
|
MDU6SXNzdWU2NzQ4NTExNDc=
| 482
|
Bugs : dataset.map() is frozen on ELI5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ratthachat",
"id": 56621342,
"login": "ratthachat",
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ratthachat",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] |
[
"This comes from an overflow in pyarrow's array.\r\nIt is stuck inside the loop that reduces the batch size to avoid the overflow.\r\nI'll take a look",
"I created a PR to fix the issue.\r\nIt was due to an overflow check that handled badly an empty list.\r\n\r\nYou can try the changes by using \r\n```\r\n!pip install git+https://github.com/huggingface/nlp.git@fix-bad-type-in-overflow-check\r\n```\r\n\r\nAlso I noticed that the first 1000 examples have an empty list in the `title_urls` field. The feature type inference in `.map` will consider it `null` because of that, and it will crash when it encounter the next example with a `title_urls` that is not empty.\r\n\r\nTherefore to fix that, what you can do for now is increase the writer batch size so that the feature inference will take into account at least one example with a non-empty `title_urls`:\r\n\r\n```python\r\n# default batch size is 1_000 and it's not enough for feature type inference because of empty lists\r\nvalid_dataset = valid_dataset.map(make_input_target, writer_batch_size=3_000) \r\n```\r\n\r\nI was able to run the frozen cell with these changes.",
"@lhoestq Perfect and thank you very much!!\r\nClose the issue.",
"@lhoestq mapping the function `make_input_target` was passed by your fixing.\r\n\r\nHowever, there is another error in the final step of `valid_dataset.map(convert_to_features, batched=True)`\r\n\r\n`ArrowInvalid: Could not convert Thepiratebay.vg with type str: converting to null type`\r\n(The [same colab notebook above with new error message](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing#scrollTo=5sRrJ3_C8rLt))\r\n\r\nDo you have some ideas? (I am really sorry I could not debug it by myself since I never used `pyarrow` before) \r\nNote that `train_dataset.map(convert_to_features, batched=True)` can be run successfully even though train_dataset is 27x bigger than `valid_dataset` so I believe the problem lies in some field of `valid_dataset` again .",
"I got this issue too and fixed it by specifying `writer_batch_size=3_000` in `.map`.\r\nThis is because Arrow didn't expect `Thepiratebay.vg` in `title_urls `, as all previous examples have empty lists in `title_urls `",
"I am clear now . Thank so much again Quentin!",
"I'm getting a hanging `dataset.map()` when running a gradio app with `gradio` for auto-reloading instead of `python`",
"Maybe this is an issue with gradio, could you open an issue on their repo ? `Dataset.map` simply uses `multiprocess.Pool` for multiprocessing\r\n\r\nIf you interrupt the program mayeb the stack trace would give some information of where it was hanging in the code (maybe a lock somewhere ?)"
] | 2020-08-07T08:23:35
| 2023-04-06T09:39:59
| 2020-08-11T23:55:15
|
NONE
| null | null | null | null |
Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` **freezes** within the first few hundred examples. By contrast, this works totally fine on SQuAD (80,000 examples). Both `nlp` versions 0.3.0 and 0.4.0 produce the frozen process, and various `pyarrow` versions (0.16.0 / 0.17.0 / 1.0.0) show the same freeze.
Reproducible code can be found on [this colab notebook ](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow.
----------------------------------------
**More info:** if I run a plain `for` loop instead of `map` and apply the function myself, there's no error and it finishes within 10 seconds. However, an `nlp` dataset is immutable (I couldn't manually assign a new key-value pair to the `dataset` object).
I also notice that SQuAD texts are quite clean while ELI5 texts contain many special characters; I'm not sure if this is the cause.
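For completeness, a sketch of the workaround that came out of the comments above: raising `writer_batch_size` so that feature type inference sees at least one example with a non-empty `title_urls` list. `make_input_target` below is a hypothetical stand-in for the notebook's preprocessing function:
```python
import nlp

valid_dataset = nlp.load_dataset("eli5", split="validation_eli5")


def make_input_target(example):
    # Hypothetical stand-in for the notebook's T5 input/target construction.
    return {
        "input_text": "question: " + example["title"],
        "target_text": " ".join(example["answers"]["text"]),
    }


# The default writer_batch_size (1_000) only covers examples with an empty
# `title_urls` list, so the inferred type is null and later rows crash/hang.
# A larger writer batch includes at least one non-empty list.
valid_dataset = valid_dataset.map(make_input_target, writer_batch_size=3_000)
```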
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ratthachat",
"id": 56621342,
"login": "ratthachat",
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ratthachat",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/482/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/482/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
| 4 days, 15:31:40
|