---
language:
- en
multilinguality:
- monolingual
license: other
license_name: topicnet
license_link: >-
  https://github.com/machine-intelligence-laboratory/TopicNet/blob/master/LICENSE.txt
configs:
- config_name: "bag-of-words"
  default: true
  data_files:
  - split: train
    path: "data/Reuters_BOW.csv.gz"
- config_name: "natural-order-of-words"
  data_files:
  - split: train
    path: "data/Reuters_NOOW.csv.gz"
task_categories:
- text-classification
task_ids:
- topic-classification
- multi-class-classification
- multi-label-classification
tags:
- topic-modeling
- topic-modelling
- text-clustering
- multimodal-data
- multimodal-learning
- modalities
- document-representation
---
# Reuters

The Reuters Corpus contains 10,788 news documents totaling 1.3 million words. The documents are classified into 90 topics (a document may belong to more than one) and grouped into two sets, "training" and "test"; for example, the text with fileid `test/14826` is a document drawn from the test set. This split supports training and testing algorithms that automatically detect the topic of a document.

* Language: English
* Number of topics: 90
* Number of articles: 10,788
* Year: 2000
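
## Usage

A minimal loading sketch with the Hugging Face `datasets` library. The repository id below is a placeholder, since the card does not state the dataset's path on the Hub; the configuration names come from the metadata above.

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual Hub path.
REPO_ID = "<namespace>/reuters"

# Default configuration: documents as bags of words.
bow = load_dataset(REPO_ID, "bag-of-words", split="train")

# Alternative configuration: documents with word order preserved.
noow = load_dataset(REPO_ID, "natural-order-of-words", split="train")

print(bow[0])  # inspect a single record
```

The original corpus also ships with NLTK (see References), which is where fileids such as `test/14826` come from:

```python
import nltk
from nltk.corpus import reuters

nltk.download("reuters")  # fetch the corpus on first use

print(len(reuters.fileids()))            # 10,788 documents
print(len(reuters.categories()))         # 90 topics
print(reuters.categories("test/14826"))  # topics of one test-set document
```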
## References

* NLTK datasets: https://www.nltk.org/book/ch02.html
* Dataset site: https://trec.nist.gov/data/reuters/reuters.html