Update README
- README.md +111 -0
- data_collection_utils/scrape_gh_docs.py +14 -28
README.md
CHANGED

@@ -11,4 +11,115 @@ size_categories:
- 100K<n<1M
---

# GoodDocs-v0: High-quality code documentation texts

GoodDocs-v0 is a text dataset scraped from high-quality documentation sources in the open-source ecosystem, in particular the top GitHub repositories by stars. It is designed to serve as a foundation for building reasoning systems grounded in software documentation, enabling tasks such as:

- Code and API understanding
- Documentation question answering and retrieval
- Planning and tool-use grounded in docs
- Long-context reasoning over multi-file documentation

## What's in this repository

- `texts.parquet` — per-file Markdown documents and metadata extracted from documentation trees.
- `data_collection_utils/` — utilities to regenerate the dataset:
  - `scrape_gh_docs.py` — main scraper/collector for documentation from GitHub repositories.
  - `parse_gh_docs_config.yaml` — reproducible configuration (inputs, outputs, filters, strategies).
  - `github_links.txt` — the seed list of GitHub repositories (e.g., top repositories by stars).
  - `top_1000_repos.py` — helper to refresh the top-repositories list via the public site referenced in the code.

## Schema

Two main Parquet artifacts are produced by the pipeline.

1) `results.parquet` — one row per repository (see `process_repo_entry()` in `data_collection_utils/scrape_gh_docs.py`):
   - `owner` — repository owner
   - `repo` — repository name
   - `default_branch` — branch used when scraping
   - `method` — strategy used (e.g., `docs_folder_in_repo`, `sparse_docs`, `zip_whole_repo`, `no_fetch`)
   - `docs_found` — whether documentation was located (boolean)
   - `docs_folder` — relative path (under `outdir`) used to count docs
   - `md_count` — number of Markdown files discovered
   - `status` — `ok`, `low-md-count`, `docs-not-found`, etc.
   - `note` — optional free-text message
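
If you regenerate the dataset locally, a quick sanity check is to tabulate `status` and `method` from `results.parquet`. A minimal sketch (the `out/` path is an assumption; use whatever `outdir` your config points at):

```python
import pandas as pd

# Hypothetical path: adjust to your configured outdir.
results = pd.read_parquet("out/results.parquet")

# How each repository ended up, and which fetch strategy was used.
print(results["status"].value_counts())
print(results["method"].value_counts())

# Repositories where docs were found but only a handful of Markdown files were collected.
few = results[results["docs_found"] & (results["md_count"] < 5)]
print(few[["owner", "repo", "md_count"]])
```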

2) `texts.parquet` — one row per Markdown file (see the `md_rows` assembly in `main()`):
   - `owner`, `repo`, `repo_dir`
   - `file_rel_repo` — path relative to the saved repo root
   - `file_rel_outdir` — path relative to `outdir`
   - `size` — file size in bytes
   - `mtime` — file modification time (epoch seconds)
   - `lang`, `lang_prob` — language prediction fields (via `langid.py`, populated when language filtering is enabled)
   - `content` — raw Markdown text
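
Because each row is a single file, a repository's multi-file documentation can be reassembled from these columns. A small sketch that stitches each repo's files back together in path order:

```python
import pandas as pd

df = pd.read_parquet("texts.parquet")

# One long Markdown document per (owner, repo), ordered by path within the saved repo tree.
per_repo = (
    df.sort_values(["owner", "repo", "file_rel_repo"])
      .groupby(["owner", "repo"])["content"]
      .agg("\n\n".join)
)
print(per_repo.head())
```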

## Quickstart

Load the dataset with pandas:

```python
import pandas as pd

df = pd.read_parquet("texts.parquet")
print(len(df), "rows")
print(df.columns.tolist())
```

Typical uses:

- Retrieval corpora for doc QA and RAG pipelines (a filtering sketch follows below)
- Supervision for instruction tuning grounded in docs
- Long-context model evaluation with real project documentation
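
As a starting point for the retrieval use case, a minimal filtering pass might look like the following; the language and size thresholds are illustrative choices, not values used by the pipeline:

```python
import pandas as pd

df = pd.read_parquet("texts.parquet")

# Keep reasonably sized English documents. Both bounds are arbitrary examples.
corpus = df[
    (df["lang"] == "en") & (df["size"].between(500, 50_000))
][["owner", "repo", "file_rel_repo", "content"]]

print(len(corpus), "documents selected")
# Each row can then be chunked and embedded by whatever RAG stack you use.
```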

## Reproducing the dataset

The scraper is configurable and designed to be reproducible via `data_collection_utils/parse_gh_docs_config.yaml`.

1) Prerequisites
   - System tools: `git`
   - Python 3.11+ packages: `pandas`, `pyarrow`, `requests`, `tqdm`, `PyYAML`, `langid` (an example install command follows below)
   - For refreshing the top-repositories list (optional): `playwright` (and `playwright install` for a browser)
   - A GitHub API token in the environment (`GITHUB_TOKEN`) or a file referenced by the config (`token_file`)
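
For example, the Python dependencies can be installed with pip (package names are assumed to match their PyPI distributions):

```bash
pip install pandas pyarrow requests tqdm pyyaml langid
# Optional, only needed to refresh the seed list:
pip install playwright && playwright install chromium
```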

2) Inputs
   - `data_collection_utils/github_links.txt` — list of repositories to process, one per line, either as `owner/repo` or as a full GitHub URL (see the example below)
   - You can refresh this list with `data_collection_utils/top_1000_repos.py` if desired.
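
The two accepted line formats would look like this (illustrative entries, not necessarily present in the shipped list):

```text
torvalds/linux
https://github.com/pytorch/pytorch
```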

3) Run

```bash
python3 data_collection_utils/scrape_gh_docs.py --config data_collection_utils/parse_gh_docs_config.yaml
```

Key configuration knobs (see the YAML file; an illustrative sketch follows below):

- `outdir` — where raw files and interim results are stored
- `prefer_sparse` — attempt a `git sparse-checkout` of documentation folders first
- `prefer_zip` — fall back to downloading the whole repository zip and extracting Markdown
- `only_md` — restrict extracted files to Markdown
- `no_fetch` — do not call the network; rebuild the Parquet file(s) from an existing `outdir`
- `lang_filter`, `min_lang_prob`, `min_text_chars` — control language gating in `texts.parquet`
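
A hypothetical configuration using these knobs might look like the following; the values and exact key layout are assumptions, so defer to the shipped `parse_gh_docs_config.yaml`:

```yaml
# Illustrative only; check parse_gh_docs_config.yaml for the real structure.
outdir: out
prefer_sparse: true
prefer_zip: true
only_md: true
no_fetch: false
lang_filter: en
min_lang_prob: 0.7
min_text_chars: 200
token_file: ~/.github_token
```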

Outputs are written to `<outdir>/results.parquet` and, when enabled, `<outdir>/texts.parquet`. In this repository, `texts.parquet` is provided at the root for convenience.

## Language filtering

Language detection is performed with `langid.py` (see the imports in `data_collection_utils/scrape_gh_docs.py`). The default configuration keeps English-only files (`lang_filter: en`) that meet a minimum probability threshold (`min_lang_prob`); files shorter than `min_text_chars` are excluded whenever a language filter is set.
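
The per-file gate is essentially the following check, shown here as a simplified sketch of the logic in `scrape_gh_docs.py` with the thresholds as plain parameters (the `min_text_chars` default below is a placeholder; the scraper's CLI default for `min_lang_prob` is 0.7):

```python
import langid

def keep_file(text: str, lang_filter: str = "en",
              min_lang_prob: float = 0.7, min_text_chars: int = 200) -> bool:
    """Return True if a Markdown file passes the language gate."""
    if len(text) < min_text_chars:
        # Too short to classify; excluded when a language filter is active.
        return False
    lang, prob = langid.classify(text)  # (language code, confidence score)
    return lang == lang_filter and float(prob) >= min_lang_prob
```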

## Licensing

- Code and dataset scaffolding in this repository are under the MIT license (see the frontmatter).
- The original documentation content belongs to the respective upstream projects and remains governed by their licenses. Please consult each repository's license before redistribution or commercial use.

## Acknowledgements

This dataset draws from the open-source community's documentation efforts. The seed list targets highly starred repositories to bias toward quality, breadth, and maturity.

Note to self: `size` distribution: 20th percentile 363, median 701, 95th percentile 17392.
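
These percentiles can be recomputed directly from the shipped file:

```python
import pandas as pd

df = pd.read_parquet("texts.parquet")

# Quantiles of the per-file size column; values match the note above only
# for the same snapshot of the dataset.
print(df["size"].quantile([0.20, 0.50, 0.95]))
```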

data_collection_utils/scrape_gh_docs.py
CHANGED

```diff
@@ -49,7 +49,7 @@ import subprocess
 import yaml
 from datetime import datetime, timezone
 import logging
-import gcld3
+import langid  # https://github.com/saffsd/langid.py


 GITHUB_API = "https://api.github.com"
@@ -867,7 +867,7 @@ def main():
         "--min-lang-prob",
         type=float,
         default=0.7,
-        help="Minimum language probability from
+        help="Minimum language probability from langid to accept a file (0..1)",
     )
     parser.add_argument(
         "--min-text-chars",
@@ -976,13 +976,7 @@ def main():
     # Build from existing directories in outdir
     results: List[Dict[str, Any]] = []
     md_rows: List[Dict[str, Any]] = []
-    #
-    lang_identifier = None
-    if lang_filter_value is not None and str(lang_filter_value).strip() != "":
-        lang_identifier = gcld3.NNetLanguageIdentifier(
-            min_num_bytes=int(min_text_chars_value),
-            max_num_bytes=1000000,
-        )
+    # langid does not require explicit initialization
     repo_dirs = [
         d for d in outdir.iterdir() if d.is_dir() and "__" in d.name and not d.name.startswith("tmp_")
     ]
@@ -1033,24 +1027,17 @@ def main():
             text = md_file.read_text(encoding="utf-8", errors="replace")
             lang_code = None
             lang_prob = None
-            is_reliable = None
             include = True
-
-
-
+            # Apply langid-based detection and optional filter
+            if len(text) >= min_text_chars_value:
+                lid_code, lid_prob = langid.classify(text)
+                lang_code = lid_code
+                lang_prob = float(lid_prob)
+                if lang_filter_value is not None and str(lang_filter_value).strip() != "":
+                    include = (lang_code == lang_filter_value) and (lang_prob >= min_lang_prob_value)
+            else:
+                if lang_filter_value is not None and str(lang_filter_value).strip() != "":
                     include = False
-            else:
-                pred = lang_identifier.FindLanguage(text)
-                if pred is None:
-                    include = False
-                else:
-                    lang_code = pred.language
-                    lang_prob = float(getattr(pred, "probability", 0.0))
-                    is_reliable = bool(getattr(pred, "is_reliable", False))
-                    include = (
-                        (lang_code == lang_filter_value)
-                        and (lang_prob is None or lang_prob >= min_lang_prob_value)
-                    )
             if include:
                 row = {
                     "owner": owner,
@@ -1060,9 +1047,8 @@ def main():
                     "file_rel_outdir": str(md_file.relative_to(outdir)),
                     "size": md_file.stat().st_size,
                     "mtime": int(md_file.stat().st_mtime),
-                    "lang": lang_code
-                    "lang_prob": lang_prob
-                    "lang_reliable": is_reliable if is_reliable is not None else None,
+                    "lang": lang_code,
+                    "lang_prob": lang_prob,
                     "content": text,
                 }
                 md_rows.append(row)
```