Stars and source for filtering
Changed files:
- .gitignore (+1 -0)
- README.md (+49 -14)
- data_collection_utils/awesome_final_repos.py (+62 -4)
- data_collection_utils/awesome_scrap_config.yaml (+6 -2)
.gitignore
CHANGED
@@ -2,6 +2,7 @@
 output/
 md-failed.txt
 github_links.txt
+awesome-repos.txt
 
 
 # Byte-compiled / optimized / DLL files
README.md
CHANGED
@@ -20,28 +20,29 @@ GoodDocs-v0 is a text dataset scraped from high-quality documentation sources in
 - Planning and tool-use grounded in docs
 - Long-context reasoning over multi-file documentation
 
-
 ## What's in this repository
 
 - `texts.parquet` — per-file Markdown documents and metadata extracted from documentation trees.
+- `awesome-repos.parquet` — structured links extracted from Awesome lists-of-lists (`name`, `link`, `description`, `source_repo`, optional `stars`).
 - `data_collection_utils/` — utilities to regenerate the dataset:
   - `scrape_gh_docs.py` — main scraper/collector for documentation from GitHub repositories.
   - `parse_gh_docs_config.yaml` — reproducible configuration (inputs, outputs, filters, strategies).
   - `github_links.txt` — the seed list of GitHub repositories (e.g., top repositories by stars).
+  - `awesome_final_repos.py` — extractor for non-"awesome" repositories referenced by Awesome lists.
+  - `awesome_scrap_config.yaml` — configuration for `awesome_final_repos.py` (root, depth, output, cache, workers, optional `fetch_stars`).
   - `top_1000_repos.py` — helper to refresh the top‑repositories list via the public site referenced in the code.
 
-
 ## Schema
 
 texts.parquet — one row per Markdown file (see `md_rows` assembly in `main()`):
-- `owner`, `repo`, `repo_dir`
-- `file_rel_repo` — path relative to the saved repo root
-- `file_rel_outdir` — path relative to `outdir`
-- `size` — file size in bytes
-- `mtime` — file modification time (epoch seconds)
-- `lang` — language prediction field (via `langid.py` when language filtering is enabled)
-- `content` — raw Markdown text
 
+- `owner`, `repo`, `repo_dir`
+- `file_rel_repo` — path relative to the saved repo root
+- `file_rel_outdir` — path relative to `outdir`
+- `size` — file size in bytes
+- `mtime` — file modification time (epoch seconds)
+- `lang` — language prediction field (via `langid.py` when language filtering is enabled)
+- `content` — raw Markdown text
 
 ## Quickstart
 
@@ -60,7 +61,6 @@ Typical uses:
 - Supervision for instruction tuning grounded in docs
 - Long-context model evaluation with real project documentation
 
-
 ## Reproducing the dataset
 
 The scraper is configurable and designed to be reproducible via `data_collection_utils/parse_gh_docs_config.yaml`.
@@ -94,21 +94,56 @@ Configuration (YAML-driven; see `data_collection_utils/parse_gh_docs_config.yaml
 
 Output is written to `<outdir>/texts.parquet`.
 
+## Awesome list extraction
+
+`data_collection_utils/awesome_final_repos.py` crawls the Awesome list-of-lists and extracts final repositories (those whose repo names do not include "awesome"). For each bullet entry like:
+
+```
+* [Fuse](https://github.com/owner/repo) - Mobile development tools.
+```
+
+It records:
+
+- `name`: the markdown link text (e.g., `Fuse`).
+- `link`: canonical GitHub repository URL (e.g., `https://github.com/owner/repo`).
+- `description`: text after the ` - ` dash, or the rest of the line (with the link and bullet removed) if no dash.
+- `stars` (optional): repository stargazers count when enabled.
+
+Configuration is YAML-first via `data_collection_utils/awesome_scrap_config.yaml`:
+
+- `root`: root Awesome repository URL, e.g., `https://github.com/sindresorhus/awesome`.
+- `depth`: recursion depth for nested Awesome lists (0 = only root).
+- `output_dir`: directory for `awesome-repos.parquet`.
+- `cache_dir`: directory for README fetch caches.
+- `workers`: concurrency for network requests.
+- `fetch_stars`: when `true`, also fetch stargazers for each parsed repo (makes extra API calls) and include a `stars` column.
+
+Run:
+
+```bash
+python3 data_collection_utils/awesome_final_repos.py
+# or adjust via YAML first, then run without flags
+```
+
+Schema of `awesome-repos.parquet`:
+
+- `name` — link text from the Awesome entry.
+- `link` — canonical GitHub URL (<https://github.com/owner/repo>).
+- `description` — description text without the leading ` - ` and without repeating the name.
+- `source_repo` — the Awesome list repository where the entry was found, formatted as `owner/repo`.
+- `stars` — integer, optional; only present when `fetch_stars: true`.
 
 ## Language filtering
 
 Language detection is performed with `langid.py` (see imports in `data_collection_utils/scrape_gh_docs.py`). The default configuration keeps English-only files (`lang_filter: en`). There is no probability/confidence threshold; we gate by the predicted language label and a minimum text length (`min_text_chars`).
 
-
 ## Licensing
 
 - Code and dataset scaffolding in this repository are under the MIT license (see frontmatter).
 - The original documentation content belongs to the respective upstream projects and remains governed by their licenses. Please consult each repository’s license before redistribution or commercial use.
 
-
 ## Acknowledgements
 
 This dataset draws from the open-source community’s documentation efforts. The seed list targets highly-starred repositories to bias toward quality, breadth, and maturity.
 
-
-Note to self: `size` distribution: 20th percentile - 363 symbols, 50p - 701, 95p - 17392
+Note to self: `size` distribution: 20th percentile - 363 symbols, 50p - 701, 95p - 17392
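As a quick illustration of the `texts.parquet` schema in the README diff above, here is a minimal loading and filtering sketch. It assumes pandas with a parquet engine (e.g., pyarrow) and a local copy of the file; the 363-character cutoff simply reuses the 20th-percentile figure from the "Note to self" line, not a recommended threshold.

```python
import pandas as pd

# Per-file Markdown rows: owner, repo, repo_dir, file_rel_repo,
# file_rel_outdir, size, mtime, lang, content.
df = pd.read_parquet("texts.parquet")
print(df.columns.tolist())

# Keep English files above the 20th-percentile size noted in the README.
docs = df[(df["lang"] == "en") & (df["size"] >= 363)]
print(f"{len(docs)} of {len(df)} files kept")
```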
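To make the bullet-entry parsing described in the "Awesome list extraction" section concrete, here is a small, self-contained sketch of pulling `name`, `link`, and `description` out of one Awesome entry. The regex and the no-dash fallback are illustrative only, not the exact logic of `_extract_entries_from_markdown_lines`.

```python
import re

# One Awesome bullet: "* [name](https://github.com/owner/repo) - description"
ENTRY = re.compile(
    r"^\s*[-*+]\s*\[(?P<name>[^\]]+)\]\((?P<url>https://github\.com/[^)\s]+)\)"
    r"\s*(?:-\s*(?P<desc>.*))?$"
)

line = "* [Fuse](https://github.com/owner/repo) - Mobile development tools."
m = ENTRY.match(line)
if m:
    name = m.group("name")
    link = m.group("url").rstrip("/")
    # The script falls back to the rest of the line (bullet and [name](url)
    # removed) when there is no " - "; here we just take the dash part.
    description = (m.group("desc") or "").strip()
    print(name, link, description, sep=" | ")
```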
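The language gate described in the README (predicted label plus a minimum length, no confidence threshold) can be sketched as follows; `keep_doc` is a hypothetical helper, and the default for `min_text_chars` is a placeholder rather than the project's actual value.

```python
import langid

def keep_doc(text: str, lang_filter: str = "en", min_text_chars: int = 200) -> bool:
    # Gate on length first, then on the predicted label; langid's confidence
    # score is deliberately ignored, matching the README's description.
    if len(text) < min_text_chars:
        return False
    predicted, _score = langid.classify(text)
    return predicted == lang_filter

print(keep_doc("This is a short English paragraph about documentation. " * 10))
```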
data_collection_utils/awesome_final_repos.py
CHANGED
@@ -38,6 +38,7 @@ import yaml
 from dotenv import load_dotenv
 from github_api_utils import fetch_repo_readme_markdown
 import pandas as pd
+from github_api_utils import github_headers
 
 load_dotenv()
 
@@ -134,7 +135,9 @@ def extract_github_links_from_markdown(md: str) -> List[str]:
     return sorted(urls)
 
 
-def _extract_entries_from_markdown_lines(
+def _extract_entries_from_markdown_lines(
+    md: str, current_owner: str, current_repo: str
+) -> List[Dict[str, str]]:
     """
     Extract entries of the form: bullet + [name](url) optionally followed by " - description".
     If there's no " - ", use the entire line as description but remove the [name](url) part.
@@ -199,7 +202,7 @@ async def crawl_awesome_final_entries(
     visited_awesome.add(root_cu)
     queue.append((root_owner, root_repo, 0))
 
-    # map canonical link -> {name, link, description}
+    # map canonical link -> {name, link, description, source_repo}
     results: Dict[str, Dict[str, str]] = {}
 
     while queue:
@@ -224,7 +227,12 @@
                         queue.append((o, r, depth + 1))
                 else:
                     if cu not in results:
-                        results[cu] = {
+                        results[cu] = {
+                            "name": e["name"],
+                            "link": cu,
+                            "description": e["description"],
+                            "source_repo": f"{owner}/{repo}",
+                        }
                     else:
                         # Prefer keeping the first occurrence; if existing description is empty and new is not, update
                         if not results[cu]["description"] and e["description"]:
@@ -233,6 +241,35 @@
     return list(results.values())
 
 
+async def fetch_repo_stars(
+    session: aiohttp.ClientSession, owner: str, repo: str
+) -> Optional[int]:
+    url = f"https://api.github.com/repos/{owner}/{repo}"
+    try:
+        async with session.get(url, headers=github_headers()) as resp:
+            if resp.status == 200:
+                data = await resp.json()
+                if isinstance(data, dict) and "stargazers_count" in data:
+                    return data["stargazers_count"]
+    except Exception:
+        return None
+    return None
+
+
+async def enrich_with_stars(
+    session: aiohttp.ClientSession, rows: List[Dict[str, str]], concurrency: int
+) -> None:
+    sem = asyncio.Semaphore(concurrency if concurrency and concurrency > 0 else 10)
+
+    async def one(row: Dict[str, str]):
+        async with sem:
+            owner, repo = parse_owner_repo(row["link"])  # link is canonical
+            stars = await fetch_repo_stars(session, owner, repo)
+            row["stars"] = stars if stars is not None else None
+
+    await asyncio.gather(*(one(r) for r in rows))
+
+
 def main() -> None:
     cfg_dir = Path(__file__).resolve().parent
     cfg_path = cfg_dir / "awesome_scrap_config.yaml"
@@ -271,6 +308,8 @@ def main() -> None:
         default=cfg.get("cache_dir", "output/awesome_parse_cache"),
         help="Cache directory for README content",
     )
+    # YAML-configurable flag to fetch stars for each parsed repo
+    fetch_stars_value = bool(cfg.get("fetch_stars", False))
     args = ap.parse_args()
 
     # Resolve paths relative to cfg_dir
@@ -287,9 +326,28 @@ def main() -> None:
         rows = await crawl_awesome_final_entries(
             session, cache, cache_file, args.root, args.depth
         )
+        if fetch_stars_value and rows:
+            print(
+                f"Fetching stargazers_count for {len(rows)} repos (concurrency={args.workers})..."
+            )
+            await enrich_with_stars(session, rows, args.workers)
         out_parquet = output_dir / "awesome-repos.parquet"
         output_dir.mkdir(parents=True, exist_ok=True)
-
+        # include stars column if present
+        df = pd.DataFrame(rows)
+        # Ensure column order when possible
+        cols = [
+            c
+            for c in [
+                "name",
+                "link",
+                "description",
+                "source_repo",
+                "stars",
+            ]
+            if c in df.columns
+        ]
+        df = df[cols]
         df.to_parquet(out_parquet, index=False)
         print(f"Collected {len(rows)} final repositories with descriptions")
         print(f"Wrote to {out_parquet}")
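For a quick, manual check of the new star fetching, a minimal sketch that calls `fetch_repo_stars` from the diff above on a single repository; it assumes it is run from `data_collection_utils/` so the module imports resolve, and that `github_headers()` picks up a token from `.env` as the script does.

```python
import asyncio
import aiohttp

from awesome_final_repos import fetch_repo_stars

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # sindresorhus/awesome is just a convenient, well-known test repository.
        stars = await fetch_repo_stars(session, "sindresorhus", "awesome")
        print("stargazers_count:", stars)

asyncio.run(main())
```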
data_collection_utils/awesome_scrap_config.yaml
CHANGED
@@ -8,11 +8,15 @@ root: "https://github.com/sindresorhus/awesome"
 # Maximum recursion depth for Awesome sublists
 depth: 2
 
-# Output directory for
+# Output directory for awesome-repos.parquet (relative to script dir)
 output_dir: "."
 
 # Cache directory for README content (relative to script dir)
 cache_dir: "output/awesome_parse_cache"
 
 # Number of concurrent workers for fetching
-workers: 16
+workers: 16
+
+# Optionally fetch stargazers_count for each parsed repo (extra API requests)
+# Set to true to include a 'stars' column in awesome-repos.parquet
+fetch_stars: false