Nikhil Ranjan committed
Commit 2197083 · verified · 1 Parent(s): 149aafd

Update README.md

Files changed (1):
  1. README.md +30 -26
README.md CHANGED
@@ -23,6 +23,12 @@ tags:
 - text-classification
 - arxiv
 - wikipedia
+dataset_info:
+  data_files:
+    train:
+      - dir1/subdir1/sample1.jsonl
+      - dir2/subdir2/sample2.jsonl
+  config_name: default
 ---

 # Dataset Card for Text360 Sample Dataset
@@ -37,45 +43,43 @@ tags:

 This dataset contains text samples from two sources (arXiv and Wikipedia) organized in a hierarchical directory structure. Each sample includes a text field and a subset identifier.

-### Supported Tasks and Leaderboards
-
-- **Text Classification:** The dataset can be used for text classification tasks, particularly for distinguishing between arXiv and Wikipedia content.
-
-### Languages
-
-The dataset is in English.
-
-## Dataset Structure
-
-### Data Instances
-
-Each instance in the dataset is a JSON object with the following structure:
+### Data Files Structure
+
+The dataset maintains its original directory structure:
+```
+.
+├── dir1/
+│   └── subdir1/
+│       └── sample1.jsonl  # Contains arXiv samples
+└── dir2/
+    └── subdir2/
+        └── sample2.jsonl  # Contains Wikipedia samples
+```
+
+### Data Fields
+
+Each JSONL file contains records with the following fields:
+- `text`: string - The main text content
+- `subset`: string - Source identifier ("arxiv" or "wikipedia")
+
+### Data Splits
+
+All data is included in the train split, distributed across the JSONL files in their respective directories.
+
+### Example Instance
 ```json
 {
-  "text": "A long text sample...",
-  "subset": "arxiv" or "wikipedia"
+  "text": "This is a long text sample from arxiv about quantum computing...",
+  "subset": "arxiv"
 }
 ```

-### Data Fields
-
-- `text`: The main text content of the sample
-- `subset`: The source of the text ("arxiv" or "wikipedia")
-
-### Data Splits
-
-The dataset is organized in the following directory structure:
-```
-.
-├── dir1/
-│   └── subdir1/
-│       └── sample1.jsonl
-└── dir2/
-    └── subdir2/
-        └── sample2.jsonl
-```
-
-## Dataset Creation
+## Additional Information
+
+### Dataset Creation
+
+The dataset is organized in its original directory structure, with JSONL files containing text samples from arXiv and Wikipedia sources. Each file maintains its original location and format.

 ### Curation Rationale
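The record format and single-train-split layout described in the updated card can be exercised with a short stdlib-only sketch. The directory names follow the tree in the diff, but the sample texts and the `text360_demo` root here are illustrative, not real dataset content:

```python
import json
from pathlib import Path

# Build the illustrative layout from the card (hypothetical sample content).
root = Path("text360_demo")
samples = {
    "dir1/subdir1/sample1.jsonl": {"text": "An arXiv abstract about quantum computing...", "subset": "arxiv"},
    "dir2/subdir2/sample2.jsonl": {"text": "A Wikipedia article introduction...", "subset": "wikipedia"},
}
for rel, record in samples.items():
    path = root / rel
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(record) + "\n", encoding="utf-8")

# Gather every JSONL file into one "train" split, as the card describes.
train = []
for jsonl in sorted(root.rglob("*.jsonl")):
    with jsonl.open(encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Each record carries exactly the two documented fields.
            assert set(record) == {"text", "subset"}
            train.append(record)

print(len(train))                           # 2
print(sorted(r["subset"] for r in train))   # ['arxiv', 'wikipedia']
```

Because the `data_files` mapping in the YAML header points the `train` split at these same JSONL paths, loading the repo through the `datasets` library should yield the same records without any manual globbing.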