---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: GoodDocs-v0
size_categories:
- 100K<n<1M
---

# GoodDocs-v0: High-quality code documentation texts

GoodDocs-v0 is a text dataset scraped from high-quality documentation sources in the open-source ecosystem, primarily the top 1,000 GitHub repositories by stars. It is designed to serve as a foundation for building reasoning systems grounded in software documentation, enabling tasks such as:

- Code and API understanding
- Documentation question answering and retrieval
- Planning and tool-use grounded in docs
- Long-context reasoning over multi-file documentation

## What's in this repository

- `cleaned_texts_on_metadata_only.parquet` — per-file Markdown documents and metadata extracted from documentation trees.
- `awesome-repos.parquet` — structured links extracted from Awesome lists-of-lists (`name`, `link`, `description`, `source_repo`, optional `stars`).
- `data_collection_utils/` — utilities to regenerate the dataset:
  - `scrape_gh_docs.py` — main scraper/collector for documentation from GitHub repositories.
  - `scrape_gh_docs_config.yaml` — reproducible configuration (inputs, outputs, filters, strategies).
  - `github_links.txt` — the seed list of GitHub repositories (e.g., top repositories by stars).
  - `awesome_final_repos.py` — extractor for non-"awesome" repositories referenced by Awesome lists.
  - `awesome_scrap_config.yaml` — configuration for `awesome_final_repos.py` (root, depth, output, cache, workers, optional `fetch_stars`).
  - `top_1000_repos.py` — helper to refresh the list of top repositories via the public site referenced in the code.

## Schema

`cleaned_texts_on_metadata_only.parquet` — one row per Markdown file (see `md_rows` assembly in `main()`):

- `owner`, `repo`, `repo_dir`
- `file_rel_repo` — path relative to the saved repo root
- `file_rel_outdir` — path relative to `outdir`
- `size` — file size in bytes
- `mtime` — file modification time (epoch seconds)
- `lang` — language prediction field (via `langid.py` when language filtering is enabled)
- `content` — raw Markdown text
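
A minimal sketch of checking this schema and summarizing the table once the Parquet file is downloaded (the per-repository aggregation is purely illustrative):

```python
import pandas as pd

df = pd.read_parquet("cleaned_texts_on_metadata_only.parquet")

# Columns expected per the schema above.
expected = {"owner", "repo", "repo_dir", "file_rel_repo", "file_rel_outdir",
            "size", "mtime", "lang", "content"}
print("missing columns:", expected - set(df.columns) or "none")

# Illustrative per-repository summary: Markdown file count and total bytes.
summary = (df.groupby(["owner", "repo"])
             .agg(files=("file_rel_repo", "count"), total_bytes=("size", "sum"))
             .sort_values("files", ascending=False))
print(summary.head())
```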

## Quickstart

Load the dataset with pandas:

```python
import pandas as pd
df = pd.read_parquet("cleaned_texts_on_metadata_only.parquet")
print(len(df), "rows")
print(df.columns.tolist())
```

Typical uses:

- Retrieval corpora for doc QA and RAG pipelines
- Supervision for instruction tuning grounded in docs
- Long-context model evaluation with real project documentation
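
As a sketch of the first item, one simple way to flatten the rows into a passage list for retrieval; the chunk size and the ID scheme are arbitrary choices made for this example, not part of the dataset:

```python
import pandas as pd

df = pd.read_parquet("cleaned_texts_on_metadata_only.parquet")

def chunk(text: str, max_chars: int = 2000) -> list[str]:
    # Naive fixed-width chunking by character count; real pipelines
    # usually split on headings or paragraphs instead.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

passages = [
    {"id": f"{row.owner}/{row.repo}/{row.file_rel_repo}#{j}", "text": piece}
    for row in df.itertuples(index=False)
    for j, piece in enumerate(chunk(row.content))
]
print(len(passages), "passages")
```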

## Reproducing the dataset

The scraper is configurable and designed to be reproducible via `data_collection_utils/scrape_gh_docs_config.yaml`.

1) Prerequisites
   - System tools: `git`
   - Python 3.11+ packages: `pandas`, `pyarrow`, `requests`, `tqdm`, `PyYAML`, `langid`
   - For refreshing top repositories (optional): `playwright` (and `playwright install` for a browser)
   - A GitHub API token in the environment (`GITHUB_TOKEN`) or a file referenced by the config (`token_file`)

2) Inputs
   - `data_collection_utils/github_links.txt` — list of repositories to process (either `owner/repo` or full URLs)
   - You can refresh this list with `data_collection_utils/top_1000_repos.py` if desired.

3) Run

```bash
python3 data_collection_utils/scrape_gh_docs.py
# or to rebuild Parquet(s) from existing downloads without any network calls:
python3 data_collection_utils/scrape_gh_docs.py --no-fetch
```

Configuration (YAML-driven; see `data_collection_utils/scrape_gh_docs_config.yaml`):

- `input` — path to a file containing one repo per line (owner/repo or full URL)
- `outdir`, `md_failed`, `texts_parquet`
- `workers`, `dry_run`, `quiet`, `no_fetch`
- `token_file` — GitHub token location (or set `GITHUB_TOKEN` env var)
- `prefer_sparse`, `prefer_zip`, `only_md`, `min_repo_age_years`
- `lang_filter`, `min_text_chars` — control language gating in `cleaned_texts_on_metadata_only.parquet`

Output is written to `<outdir>/cleaned_texts_on_metadata_only.parquet`.
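
For orientation, a sketch of reading the config and resolving the token; the key names follow the list above, but the fallback values shown here are placeholders, not the scraper's actual defaults:

```python
import os
import yaml

with open("data_collection_utils/scrape_gh_docs_config.yaml") as f:
    cfg = yaml.safe_load(f) or {}

# Illustrative echo of a few settings (placeholder fallbacks).
print("input       :", cfg.get("input", "data_collection_utils/github_links.txt"))
print("outdir      :", cfg.get("outdir", "out"))
print("lang_filter :", cfg.get("lang_filter", "en"))

# One plausible token resolution: GITHUB_TOKEN env var, else `token_file`.
token = os.environ.get("GITHUB_TOKEN")
if not token and cfg.get("token_file"):
    with open(cfg["token_file"]) as f:
        token = f.read().strip()
print("token set   :", bool(token))
```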

## Awesome list extraction

`data_collection_utils/awesome_final_repos.py` crawls the Awesome list-of-lists and extracts final repositories (those whose repo names do not include "awesome"). For each bullet entry like:

```
* [Fuse](https://github.com/owner/repo) - Mobile development tools.
```

It records:

- `name`: the markdown link text (e.g., `Fuse`).
- `link`: canonical GitHub repository URL (e.g., `https://github.com/owner/repo`).
- `description`: the text after the ` - ` separator, or the rest of the line (with the bullet and link removed) if there is no dash.
- `stars` (optional): repository stargazers count when enabled.
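
As a rough illustration of that parsing (a regex sketch that mirrors the description above, not the extractor's actual code):

```python
import re

BULLET = re.compile(
    r"^[*+-]\s*\[(?P<name>[^\]]+)\]\((?P<link>https://github\.com/[^)]+)\)"
    r"\s*(?:-\s*)?(?P<description>.*)$"
)

line = "* [Fuse](https://github.com/owner/repo) - Mobile development tools."
m = BULLET.match(line)
if m:
    print(m.group("name"))         # Fuse
    print(m.group("link"))         # https://github.com/owner/repo
    print(m.group("description"))  # Mobile development tools.
```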

Configuration is YAML-first via `data_collection_utils/awesome_scrap_config.yaml`:

- `root`: root Awesome repository URL, e.g., `https://github.com/sindresorhus/awesome`.
- `depth`: recursion depth for nested Awesome lists (0 = only root).
- `output_dir`: directory for `awesome-repos.parquet`.
- `cache_dir`: directory for README fetch caches.
- `workers`: concurrency for network requests.
- `fetch_stars`: when `true`, also fetch stargazers for each parsed repo (makes extra API calls) and include a `stars` column.

Run:

```bash
python3 data_collection_utils/awesome_final_repos.py
# or adjust via YAML first, then run without flags
```

Schema of `awesome-repos.parquet`:

- `name` — link text from the Awesome entry.
- `link` — canonical GitHub URL (<https://github.com/owner/repo>).
- `description` — description text without the leading ` - ` and without repeating the name.
- `source_repo` — the Awesome list repository where the entry was found, formatted as `owner/repo`.
- `stars` — integer, optional; only present when `fetch_stars: true`.
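
A small usage sketch, loading the file and ranking entries by stars when the optional column is present:

```python
import pandas as pd

repos = pd.read_parquet("awesome-repos.parquet")
print(repos.columns.tolist())

# `stars` only exists when the extractor ran with `fetch_stars: true`.
if "stars" in repos.columns:
    top = repos.dropna(subset=["stars"]).sort_values("stars", ascending=False)
    print(top[["name", "link", "stars"]].head(10))
```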

## Language filtering

Language detection is performed with `langid.py` (see the imports in `data_collection_utils/scrape_gh_docs.py`). The default configuration keeps only English files (`lang_filter: en`). There is no probability/confidence threshold: files are gated by the predicted language label and a minimum text length (`min_text_chars`).
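
A minimal sketch of that gating, assuming the same `langid` package; the `min_text_chars` value here is a placeholder, the real one comes from the config:

```python
import langid

def keep_doc(text: str, lang_filter: str = "en", min_text_chars: int = 200) -> bool:
    # Gate on length first, then on the predicted label; the confidence
    # score returned by langid is ignored, matching the description above.
    if len(text) < min_text_chars:
        return False
    label, _score = langid.classify(text)
    return label == lang_filter

print(keep_doc("This README explains how to install and configure the tool. " * 10))
```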

## Licensing

- Code and dataset scaffolding in this repository are under the MIT license (see frontmatter).
- The original documentation content belongs to the respective upstream projects and remains governed by their licenses. Please consult each repository’s license before redistribution or commercial use.

## Acknowledgements

This dataset draws from the open-source community’s documentation efforts. The seed list targets highly-starred repositories to bias toward quality, breadth, and maturity.

Note: `size` distribution — 20th percentile ≈ 363 characters, median ≈ 701, 95th percentile ≈ 17,392.