---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: GoodDocs-v0
size_categories:
- 100K<n<1M
---

# GoodDocs-v0: High-quality code documentation texts

GoodDocs-v0 is a text dataset scraped from high-quality documentation sources in the open-source ecosystem, in particular the top 1000 GitHub repositories by stars. It is designed to serve as a foundation for building reasoning systems grounded in software documentation, enabling tasks such as:

- Code and API understanding
- Documentation question answering and retrieval
- Planning and tool use grounded in docs
- Long-context reasoning over multi-file documentation

## What's in this repository

- `cleaned_texts_on_metadata_only.parquet` — per-file Markdown documents and metadata extracted from documentation trees.
- `awesome-repos.parquet` — structured links extracted from Awesome lists-of-lists (`name`, `link`, `description`, `source_repo`, optional `stars`).
- `data_collection_utils/` — utilities to regenerate the dataset:
  - `scrape_gh_docs.py` — main scraper/collector for documentation from GitHub repositories.
  - `scrape_gh_docs_config.yaml` — reproducible configuration (inputs, outputs, filters, strategies).
  - `github_links.txt` — the seed list of GitHub repositories (e.g., top repositories by stars).
  - `awesome_final_repos.py` — extractor for non-"awesome" repositories referenced by Awesome lists.
  - `awesome_scrap_config.yaml` — configuration for `awesome_final_repos.py` (root, depth, output, cache, workers, optional `fetch_stars`).
  - `top_1000_repos.py` — helper to refresh the top-repositories list via the public site referenced in the code.

## Schema

`cleaned_texts_on_metadata_only.parquet` — one row per Markdown file (see the `md_rows` assembly in `main()`):

- `owner`, `repo`, `repo_dir`
- `file_rel_repo` — path relative to the saved repo root
- `file_rel_outdir` — path relative to `outdir`
- `size` — file size in bytes
- `mtime` — file modification time (epoch seconds)
- `lang` — predicted language label (via `langid.py` when language filtering is enabled)
- `content` — raw Markdown text

## Quickstart

Load the dataset with pandas:

```python
import pandas as pd

df = pd.read_parquet("cleaned_texts_on_metadata_only.parquet")
print(len(df), "rows")
print(df.columns.tolist())
```

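The schema columns above make simple slicing straightforward. A minimal sketch (the size threshold and the grouping are illustrative, not part of the dataset tooling):

```python
import pandas as pd

df = pd.read_parquet("cleaned_texts_on_metadata_only.parquet")

# Keep English documents above an arbitrary minimum size (illustrative threshold).
docs = df[(df["lang"] == "en") & (df["size"] >= 500)]

# Count how many documentation files each repository contributes.
per_repo = docs.groupby(["owner", "repo"]).size().sort_values(ascending=False)
print(per_repo.head(10))
```
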
Typical uses:

- Retrieval corpora for doc QA and RAG pipelines (see the chunking sketch below)
- Supervision for instruction tuning grounded in docs
- Long-context model evaluation with real project documentation

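A minimal sketch of turning `content` into retrieval passages, assuming a simple paragraph-based chunker (the splitting rule and the ID scheme are illustrative):

```python
import pandas as pd

df = pd.read_parquet("cleaned_texts_on_metadata_only.parquet")

passages = []
for row in df.itertuples(index=False):
    # Split each Markdown file on blank lines; real pipelines may chunk by headings or tokens instead.
    paragraphs = [p for p in row.content.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        passages.append({
            "id": f"{row.owner}/{row.repo}/{row.file_rel_repo}#{i}",
            "text": para,
        })

print(len(passages), "passages")
```
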
## Reproducing the dataset

The scraper is configurable and designed to be reproducible via `data_collection_utils/scrape_gh_docs_config.yaml`.

1) Prerequisites

- System tools: `git`
- Python 3.11+ packages: `pandas`, `pyarrow`, `requests`, `tqdm`, `PyYAML`, `langid`
- For refreshing the top-repositories list (optional): `playwright` (and `playwright install` for a browser)
- A GitHub API token in the environment (`GITHUB_TOKEN`) or in a file referenced by the config (`token_file`)

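A sketch of one way such a token lookup could work; the helper name and the env-var-first order are illustrative, not the scraper's actual logic:

```python
import os
from pathlib import Path

def resolve_github_token(token_file: str | None = None) -> str | None:
    """Illustrative only: prefer the GITHUB_TOKEN env var, then fall back to token_file."""
    token = os.environ.get("GITHUB_TOKEN")
    if token:
        return token.strip()
    if token_file and Path(token_file).exists():
        return Path(token_file).read_text().strip()
    return None
```
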
2) Inputs

- `data_collection_utils/github_links.txt` — list of repositories to process (either `owner/repo` or full URLs)
- You can refresh this list with `data_collection_utils/top_1000_repos.py` if desired.

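Both line formats describe the same repository; a small sketch of normalizing them to `owner/repo` (the helper name is illustrative):

```python
def normalize_repo(line: str) -> str:
    """Illustrative: accept 'owner/repo' or a full GitHub URL and return 'owner/repo'."""
    line = line.strip().rstrip("/")
    if line.startswith(("http://", "https://")):
        # e.g. https://github.com/owner/repo -> owner/repo
        line = "/".join(line.split("/")[-2:])
    return line.removesuffix(".git")

print(normalize_repo("https://github.com/pallets/flask"))  # -> pallets/flask
print(normalize_repo("pallets/flask"))                     # -> pallets/flask
```
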
3) Run

```bash
python3 data_collection_utils/scrape_gh_docs.py
# or to rebuild Parquet(s) from existing downloads without any network calls:
python3 data_collection_utils/scrape_gh_docs.py --no-fetch
```

Configuration (YAML-driven; see `data_collection_utils/scrape_gh_docs_config.yaml`):

- `input` — path to a file containing one repo per line (`owner/repo` or full URL)
- `outdir`, `md_failed`, `texts_parquet`
- `workers`, `dry_run`, `quiet`, `no_fetch`
- `token_file` — GitHub token location (or set the `GITHUB_TOKEN` env var)
- `prefer_sparse`, `prefer_zip`, `only_md`, `min_repo_age_years`
- `lang_filter`, `min_text_chars` — control language gating in `cleaned_texts_on_metadata_only.parquet`

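A quick way to sanity-check the configuration before a run (the key names are those listed above; the values printed are whatever your YAML contains):

```python
import yaml  # PyYAML

with open("data_collection_utils/scrape_gh_docs_config.yaml") as fh:
    cfg = yaml.safe_load(fh)

# Print the knobs that most affect what ends up in the Parquet output.
for key in ("input", "outdir", "only_md", "lang_filter", "min_text_chars", "workers"):
    print(f"{key}: {cfg.get(key)}")
```
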
Output is written to `<outdir>/cleaned_texts_on_metadata_only.parquet`.

## Awesome list extraction

`data_collection_utils/awesome_final_repos.py` crawls the Awesome list-of-lists and extracts final repositories (those whose repo names do not include "awesome"). For each bullet entry like:

```
* [Fuse](https://github.com/owner/repo) - Mobile development tools.
```

It records:

- `name`: the Markdown link text (e.g., `Fuse`).
- `link`: canonical GitHub repository URL (e.g., `https://github.com/owner/repo`).
- `description`: text after the ` - ` dash, or the rest of the line (with the link and bullet removed) if there is no dash.
- `stars` (optional): repository stargazers count when enabled.

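A sketch of the kind of parsing this implies, assuming entries follow the bullet format shown above; the regex and function name are illustrative, not the extractor's actual code:

```python
import re

# Matches "* [Name](https://github.com/owner/repo) - Description" style bullets (illustrative).
BULLET = re.compile(
    r"^[*+-]\s*\[(?P<name>[^\]]+)\]\((?P<link>https://github\.com/[^)\s]+)\)\s*(?:-\s*(?P<desc>.*))?$"
)

def parse_bullet(line: str) -> dict | None:
    m = BULLET.match(line.strip())
    if not m:
        return None
    return {
        "name": m.group("name"),
        "link": m.group("link").rstrip("/"),
        "description": (m.group("desc") or "").strip(),
    }

print(parse_bullet("* [Fuse](https://github.com/owner/repo) - Mobile development tools."))
```
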
Configuration is YAML-first via `data_collection_utils/awesome_scrap_config.yaml`:

- `root`: root Awesome repository URL, e.g., `https://github.com/sindresorhus/awesome`.
- `depth`: recursion depth for nested Awesome lists (0 = only root).
- `output_dir`: directory for `awesome-repos.parquet`.
- `cache_dir`: directory for README fetch caches.
- `workers`: concurrency for network requests.
- `fetch_stars`: when `true`, also fetch stargazers for each parsed repo (makes extra API calls) and include a `stars` column; see the stargazer lookup sketch below.

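Stargazer counts can be read from the GitHub REST API. A minimal sketch of such a lookup (illustrative, not necessarily how the extractor does it; a `GITHUB_TOKEN` helps with rate limits):

```python
import os
import requests

def fetch_stars(owner: str, repo: str) -> int | None:
    """Illustrative: read stargazers_count from the GitHub REST API."""
    headers = {"Accept": "application/vnd.github+json"}
    token = os.environ.get("GITHUB_TOKEN")
    if token:
        headers["Authorization"] = f"Bearer {token}"
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", headers=headers, timeout=30)
    if resp.status_code != 200:
        return None
    return resp.json().get("stargazers_count")

print(fetch_stars("sindresorhus", "awesome"))
```
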
Run:

```bash
python3 data_collection_utils/awesome_final_repos.py
# or adjust via YAML first, then run without flags
```

Schema of `awesome-repos.parquet`:

- `name` — link text from the Awesome entry.
- `link` — canonical GitHub URL (`https://github.com/owner/repo`).
- `description` — description text without the leading ` - ` and without repeating the name.
- `source_repo` — the Awesome list repository where the entry was found, formatted as `owner/repo`.
- `stars` — integer, optional; only present when `fetch_stars: true`.

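For example, loading the Awesome-derived table and ranking by stars (only meaningful when the optional `stars` column was produced with `fetch_stars: true`):

```python
import pandas as pd

repos = pd.read_parquet("awesome-repos.parquet")
print(repos.columns.tolist())

# Rank entries by stargazer count when the optional column is present.
if "stars" in repos.columns:
    print(repos.sort_values("stars", ascending=False).head(10)[["name", "link", "stars"]])
```
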
## Language filtering

Language detection is performed with `langid.py` (see imports in `data_collection_utils/scrape_gh_docs.py`). The default configuration keeps English-only files (`lang_filter: en`). There is no probability/confidence threshold; we gate by the predicted language label and a minimum text length (`min_text_chars`).

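In other words, the gate reduces to a label check plus a length check. A minimal sketch of that filter (the default threshold value here is illustrative):

```python
import langid

def keep_document(text: str, lang_filter: str = "en", min_text_chars: int = 200) -> bool:
    """Illustrative gate: the predicted label must match and the text must be long enough."""
    if len(text) < min_text_chars:
        return False
    predicted_lang, _score = langid.classify(text)  # the score is ignored, as described above
    return predicted_lang == lang_filter

print(keep_document("This is a short piece of English documentation text. " * 10))
```
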
## Licensing

- Code and dataset scaffolding in this repository are under the MIT license (see frontmatter).
- The original documentation content belongs to the respective upstream projects and remains governed by their licenses. Please consult each repository’s license before redistribution or commercial use.

## Acknowledgements

This dataset draws from the open-source community’s documentation efforts. The seed list targets highly starred repositories to bias toward quality, breadth, and maturity.

Note: the `size` distribution is roughly 363 bytes at the 20th percentile, 701 at the median, and 17,392 at the 95th percentile.