MRiabov committed
Commit 20c3639 · 1 Parent(s): cffe6c7

Filtering data further
README.md CHANGED
@@ -26,7 +26,7 @@ GoodDocs-v0 is a text dataset scraped from high-quality documentation sources in
 - `awesome-repos.parquet` — structured links extracted from Awesome lists-of-lists (`name`, `link`, `description`, `source_repo`, optional `stars`).
 - `data_collection_utils/` — utilities to regenerate the dataset:
   - `scrape_gh_docs.py` — main scraper/collector for documentation from GitHub repositories.
-  - `parse_gh_docs_config.yaml` — reproducible configuration (inputs, outputs, filters, strategies).
+  - `scrape_gh_docs_config.yaml` — reproducible configuration (inputs, outputs, filters, strategies).
   - `github_links.txt` — the seed list of GitHub repositories (e.g., top repositories by stars).
   - `awesome_final_repos.py` — extractor for non-"awesome" repositories referenced by Awesome lists.
   - `awesome_scrap_config.yaml` — configuration for `awesome_final_repos.py` (root, depth, output, cache, workers, optional `fetch_stars`).
@@ -63,7 +63,7 @@ Typical uses:
 
 ## Reproducing the dataset
 
-The scraper is configurable and designed to be reproducible via `data_collection_utils/parse_gh_docs_config.yaml`.
+The scraper is configurable and designed to be reproducible via `data_collection_utils/scrape_gh_docs_config.yaml`.
 
 1) Prerequisites
 - System tools: `git`
@@ -83,7 +83,7 @@ python3 data_collection_utils/scrape_gh_docs.py
 python3 data_collection_utils/scrape_gh_docs.py --no-fetch
 ```
 
-Configuration (YAML-driven; see `data_collection_utils/parse_gh_docs_config.yaml`):
+Configuration (YAML-driven; see `data_collection_utils/scrape_gh_docs_config.yaml`):
 
 - `input` — path to a file containing one repo per line (owner/repo or full URL)
 - `outdir`, `md_failed`, `texts_parquet`
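For orientation, the config shown later in this commit is keyed by `input_parquet`, `outdir`, `md_failed`, and `texts_parquet`. A minimal, illustrative sketch of reading those keys from Python (defaults here are only examples, not the scraper's actual defaults):

```python
from pathlib import Path
import yaml

# Illustrative only: load the renamed config and read the documented keys.
cfg_path = Path("data_collection_utils/scrape_gh_docs_config.yaml")
cfg = yaml.safe_load(cfg_path.read_text(encoding="utf-8")) or {}

input_parquet = cfg.get("input_parquet", [])   # Parquet files with a 'link' column
outdir = cfg.get("outdir", "../output/raw_docs")
md_failed = cfg.get("md_failed", "../md-failed.txt")
texts_parquet = cfg.get("texts_parquet")       # optional per-file texts Parquet
print(input_parquet, outdir, md_failed, texts_parquet)
```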
clean/clean_meta.yaml CHANGED
@@ -12,7 +12,7 @@ out_filtered_reasons_csv: ../output/repometa.filtered_out.csv
 # Keep only repos whose primaryLanguage is in this list (empty means no include filter)
 include_languages: []
 # Exclude repos whose primaryLanguage is in this list
-exclude_languages: [null] # filter empty here.
+exclude_languages: [null] # filter empty values here.
 # Minimum number of stars
 min_stars: 300
 # Exclude forks
@@ -25,4 +25,4 @@ include_owners: []
 exclude_owners: []
 # Topic filters (substring match, case-insensitive) over comma-joined topics field
 include_topic_substrings: []
-exclude_topic_substrings: ["interview","interview-prep","learn"]
+exclude_topic_substrings: ["interview","interview-prep","learn","roadmap","chinese"]
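The filters above tighten the metadata cleaning step. A minimal pandas sketch of how they could be applied, assuming columns named `primaryLanguage`, `stars`, and a comma-joined `topics` string as described by the comments (the actual implementation lives in `clean/clean_meta.py` and may differ):

```python
import pandas as pd

df = pd.read_parquet("../output/repometa.parquet")  # metadata produced by fetch_gh_meta.py

min_stars = 300
exclude_topic_substrings = ["interview", "interview-prep", "learn", "roadmap", "chinese"]

mask = df["primaryLanguage"].notna()            # exclude_languages: [null] drops empty languages
mask &= df["stars"] >= min_stars                # min_stars: 300
topics = df["topics"].fillna("").str.lower()    # case-insensitive substring match
for sub in exclude_topic_substrings:
    mask &= ~topics.str.contains(sub, regex=False)

kept = df[mask]
```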
data_collection_utils/fetch_gh_meta.py CHANGED
@@ -27,6 +27,7 @@ import pandas as pd
 import yaml
 from tqdm import tqdm
 import logging
+from datetime import datetime
 
 from github_api_utils import fetch_repos_metadata_graphql
 
@@ -107,9 +108,8 @@ def main():
         return [_resolve_cfg_path(val)]
 
     input_parquet_values = _resolve_cfg_paths(cfg.get("input_parquet"))
-    out_parquet_value = _resolve_cfg_path(
-        cfg.get("out_parquet", "../output/repometa.parquet")
-    )
+    out_parquet_value = _resolve_cfg_path(cfg.get("out_parquet", "../output/repometa.parquet"))
+    resume = bool(cfg.get("resume", True))
     batch_size = int(cfg.get("batch_size", 20))
     quiet = bool(cfg.get("quiet", False))
 
@@ -139,10 +139,23 @@ def main():
         seen.add(key)
         pairs.append((owner, repo))
 
-    logger.info(f"Total unique repos to fetch: {len(pairs)}")
+    # Resume: if output exists and resume=true, skip already-present repos
+    existing_map = {}
+    out_path = Path(out_parquet_value)
+    if resume and out_path.exists():
+        try:
+            existing_df = pd.read_parquet(out_path)
+            if {"owner", "repo"}.issubset(existing_df.columns):
+                existing_map = {f"{o}/{r}": True for o, r in zip(existing_df["owner"], existing_df["repo"])}
+        except Exception:
+            existing_map = {}
+    if existing_map:
+        pairs = [(o, r) for (o, r) in pairs if f"{o}/{r}" not in existing_map]
+    logger.info(f"Total unique repos to fetch: {len(pairs)} (resume={'on' if resume else 'off'})")
 
     # Fetch in batches via GraphQL
     records: List[Dict[str, Any]] = []
+    run_ts = datetime.utcnow().isoformat()
     for i in tqdm(range(0, len(pairs), batch_size), desc="GraphQL batches"):
         batch = pairs[i : i + batch_size]
         meta = fetch_repos_metadata_graphql(batch)
@@ -168,13 +181,29 @@ def main():
                     ),
                     "is_fork": m.get("is_fork"),
                     "parent_url": m.get("parent_url"),
+                    "updated_at": run_ts,
                 }
             )
 
     df_out = pd.DataFrame(records)
     out_path = Path(out_parquet_value)
     out_path.parent.mkdir(parents=True, exist_ok=True)
-    df_out.to_parquet(out_path)
+    if resume and out_path.exists():
+        try:
+            existing_df = pd.read_parquet(out_path)
+            # Ensure updated_at exists on existing_df as well
+            if "updated_at" not in existing_df.columns:
+                existing_df["updated_at"] = None
+            combined = pd.concat([existing_df, df_out], ignore_index=True)
+            # Drop duplicates by owner/repo keeping last (newest fetch)
+            combined = combined.drop_duplicates(subset=["owner", "repo"], keep="last")
+            combined.to_parquet(out_path, index=False)
+            logger.info(f"Appended {len(df_out)} new repos (resume) to {out_path} (total {len(combined)})")
+            return
+        except Exception:
+            # If any issue, fall back to overwrite with new
+            pass
+    df_out.to_parquet(out_path, index=False)
     logger.info(f"Wrote metadata for {len(df_out)} repos to {out_path}")
 
 
data_collection_utils/fetch_gh_meta_config.yaml CHANGED
@@ -12,3 +12,6 @@ batch_size: 20
 
 # Logging
 quiet: false
+
+# Resume: skip refetching repos that already exist in out_parquet
+resume: true
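With `resume: true`, `fetch_gh_meta.py` skips repos already present in `out_parquet` and merges newly fetched rows into the existing file, keeping the newest row per `owner`/`repo`. A toy illustration of that merge semantics (the data values are made up):

```python
import pandas as pd

existing_df = pd.DataFrame(
    {"owner": ["a", "b"], "repo": ["x", "y"], "updated_at": [None, None]}
)
df_out = pd.DataFrame(
    {"owner": ["b"], "repo": ["y"], "updated_at": ["2024-01-01T00:00:00"]}
)

# Same merge as the resume branch: concat, then keep the last (newest) row per owner/repo.
combined = pd.concat([existing_df, df_out], ignore_index=True)
combined = combined.drop_duplicates(subset=["owner", "repo"], keep="last")
print(combined)  # a/x unchanged; b/y now carries the fresh updated_at
```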
data_collection_utils/scrape_gh_docs.py CHANGED
@@ -14,14 +14,14 @@ Key features:
 3) Zip fallback (optional): `--prefer-zip` to download a codeload zip (no REST usage) and extract only .md.
 4) Org heuristics and search fallback via GitHub API if direct docs folder not found.
 - Content selection: `--only-md` limits downloads/extractions to Markdown files.
-- Central config: reads YAML from `parse_gh_docs_config.yaml` to control inputs/outputs and strategies.
+- Central config: reads YAML from `scrape_gh_docs_config.yaml` to control inputs/outputs and strategies.
 - Note: Repository metadata fetching and filtering (e.g., by age/language/topics) has been split
   into a separate pipeline step (see `data_collection_utils/fetch_gh_meta.py` and `clean/clean_meta.py`).
 - Quiet mode: `--quiet` or YAML `quiet: true` switches logging to warnings+ so tqdm progress stays visible.
 - No-fetch mode: `--no-fetch` rebuilds Parquet(s) from existing outdir without any network calls. You can also emit a per-file texts Parquet via `--texts-parquet` or YAML `texts_parquet`.
 
 Typical usage:
-    uv run starting_data/scrape_gh_docs.py --config starting_data/parse_gh_docs_config.yaml
+    uv run starting_data/scrape_gh_docs.py --config starting_data/scrape_gh_docs_config.yaml
 
 Outputs:
 - Saves files under `<outdir>/<owner>__<repo>/...`.
@@ -48,6 +48,7 @@ import subprocess
 import yaml
 import duckdb
 import logging
+from datetime import datetime
 import langid  # https://github.com/saffsd/langid.py
 
 
@@ -145,11 +146,12 @@ def collect_md_rows_for_repo_dir(
     outdir: Path,
     lang_filter_value: Optional[str],
     min_text_chars_value: int,
+    updated_at: str,
 ) -> List[Dict[str, Any]]:
     """Scan a single <owner>__<repo> directory for Markdown files and build row dicts.
 
     Returns a list of rows with fields: owner, repo, repo_dir, file_rel_repo,
-    file_rel_outdir, size, mtime, lang, content.
+    file_rel_outdir, size, mtime, lang, content, updated_at.
     """
     try:
         owner, repo = d.name.split("__", 1)
@@ -185,6 +187,7 @@
                 "mtime": int(md_file.stat().st_mtime),
                 "lang": lang_code,
                 "content": text,
+                "updated_at": updated_at,
             }
             rows.append(row)
     return rows
@@ -529,6 +532,7 @@ def process_repo_entry(
     prefer_zip: bool = False,
     prefer_sparse: bool = False,
     only_md: bool = False,
+    resume: bool = True,
 ):
     owner_repo = owner_repo.strip()
     if not owner_repo or owner_repo.startswith("#"):
@@ -557,6 +561,34 @@
     got_any = False
     default_branch = None
 
+    # Resume: if repo directory already exists, skip network fetch and use existing files
+    repo_saved_root = outdir / safe_name(f"{owner}__{repo}")
+    if resume and repo_saved_root.exists():
+        # Determine docs folder similar to below logic
+        if (repo_saved_root / "docs").exists():
+            docs_folder = repo_saved_root / "docs"
+        else:
+            found = None
+            for p in repo_saved_root.rglob("docs"):
+                if p.is_dir():
+                    found = p
+                    break
+            docs_folder = found if found else repo_saved_root
+        md_count = count_md_files(docs_folder)
+        result["default_branch"] = None
+        result["method"] = "resume-existing"
+        result["docs_found_in"] = None
+        result["docs_found"] = True
+        assert docs_folder.is_relative_to(outdir)
+        result["docs_folder"] = str(docs_folder.relative_to(outdir))
+        result["md_count"] = int(md_count)
+        if md_count < 10:
+            append_line_threadsafe(
+                md_failed_path, f"{owner}/{repo} # md-count={md_count}\n", lock
+            )
+            result["status"] = "low-md-count"
+        return result
+
     if prefer_sparse:
         # Try to fetch only docs/ folder via git sparse-checkout without REST API
         for branch_guess in ("main", "master"):
@@ -667,7 +699,7 @@
                 f"https://github.com/{owner}/{repo}/tree/{default_branch}/{docs_path}"
             )
         elif isinstance(contents, dict) and contents.get("type") == "file":
-            logger.info(f"Found file at docs (single-file). Downloading...")
+            logger.info("Found file at docs (single-file). Downloading...")
             if not dry_run:
                 download_folder_via_api(
                     owner, repo, docs_path, default_branch, saved_root, only_md=only_md
@@ -781,7 +813,8 @@ def _init_duckdb(con):
             size BIGINT,
             mtime BIGINT,
             lang TEXT,
-            content TEXT
+            content TEXT,
+            updated_at TEXT
         );
         """
     )
@@ -801,9 +834,9 @@ def main():
     )
     args = parser.parse_args()
 
-    # Load YAML config next to this script (parse_gh_docs_config.yaml) if present
+    # Load YAML config next to this script (scrape_gh_docs_config.yaml) if present
     cfg: Dict[str, Any] = {}
-    cfg_path = Path(__file__).with_name("parse_gh_docs_config.yaml")
+    cfg_path = Path(__file__).with_name("scrape_gh_docs_config.yaml")
     if cfg_path.exists():
         cfg = yaml.safe_load(cfg_path.read_text(encoding="utf-8")) or {}
 
@@ -827,6 +860,7 @@
     prefer_zip_value = bool(cfg.get("prefer_zip", False))
     prefer_sparse_value = bool(cfg.get("prefer_sparse", False))
     only_md_value = bool(cfg.get("only_md", False))
+    resume_value = bool(cfg.get("resume", True))
     quiet_value = bool(cfg.get("quiet", False))
     # CLI should override YAML for convenience
     no_fetch_value = bool(args.no_fetch or cfg.get("no_fetch", False))
@@ -869,7 +903,7 @@
     lines: List[str] = []
     if not input_parquet_values:
         logger.error(
-            "'input_parquet' is required. Configure one or more Parquet files with a 'link' column in parse_gh_docs_config.yaml."
+            "'input_parquet' is required. Configure one or more Parquet files with a 'link' column in scrape_gh_docs_config.yaml."
        )
        sys.exit(2)
     # Read repositories from one or more Parquet files; use 'link' column
@@ -893,6 +927,7 @@
     duckdb_lock = threading.Lock()
 
     # Process repositories concurrently
+    run_ts = datetime.utcnow().isoformat()
     with tqdm(total=len(lines), desc="Repos") as pbar:
 
         def _run(lr: str):
@@ -906,6 +941,7 @@
                 prefer_zip=prefer_zip_value,
                 prefer_sparse=prefer_sparse_value,
                 only_md=only_md_value,
+                resume=resume_value,
             )
             if res is not None:
                 with results_lock:
@@ -921,6 +957,7 @@
                     outdir,
                     lang_filter_value,
                     min_text_chars_value,
+                    run_ts,
                 )
                 if rows_one:
                     cols = [
@@ -933,6 +970,7 @@
                         "mtime",
                         "lang",
                         "content",
+                        "updated_at",
                     ]
                     df_one = pd.DataFrame(rows_one, columns=cols)
                     with duckdb_lock:
@@ -995,6 +1033,7 @@
            "mtime",
            "lang",
            "content",
+            "updated_at",
        ]
        total_inserted = 0
        with duckdb_lock:
@@ -1010,6 +1049,7 @@
                outdir,
                lang_filter_value,
                min_text_chars_value,
+                run_ts,
            )
            for d in repo_dirs
        ]
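The resume branch added to `process_repo_entry` reuses a previously downloaded `<owner>__<repo>` directory instead of refetching it. Its docs-folder discovery, pulled out here as a standalone sketch (the helper name is hypothetical; the script inlines this logic):

```python
from pathlib import Path

def find_docs_folder(repo_saved_root: Path) -> Path:
    """Prefer <repo>/docs, else the first nested 'docs' directory, else the repo root."""
    direct = repo_saved_root / "docs"
    if direct.exists():
        return direct
    for p in repo_saved_root.rglob("docs"):
        if p.is_dir():
            return p
    return repo_saved_root

# e.g. find_docs_folder(Path("../output/raw_docs/owner__repo"))
```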
data_collection_utils/{parse_gh_docs_config.yaml → scrape_gh_docs_config.yaml} RENAMED
@@ -7,12 +7,12 @@
 # - data_collection_utils/awesome_final_repos.py -> awesome-repos.parquet
 # - data_collection_utils/top_1000_repos.py -> top-1000-repos.parquet
 input_parquet:
-  - ../output/links_filtered.parquet
+  - ../output/links.filtered.parquet
 
 # Output directories/files
 outdir: ../output/raw_docs
 md_failed: ../md-failed.txt
-texts_parquet: ../output/texts.parquet
+texts_parquet: ../output/cleaned_texts_on_metadata_only.parquet
 
 # Concurrency and behavior
 workers: 1
@@ -22,6 +22,9 @@ quiet: false
 # How often to checkpoint partial outputs (in processed repos)
 checkpoint_every: 50
 
+# Resume: skip refetching repos that already exist under outdir
+resume: true
+
 # Auth
 # Secrets are NOT configured here. Put your GitHub token in a .env file (recommended)
 # or export it in your shell environment. Required env var:
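The `input_parquet` entry now points at `links.filtered.parquet`, whose `link` column feeds the scraper. A rough sketch of turning those links into `(owner, repo)` pairs, assuming standard `https://github.com/owner/repo` URLs (the scripts' own parsing may handle more formats):

```python
import pandas as pd

df = pd.read_parquet("../output/links.filtered.parquet")
pairs = []
for link in df["link"].dropna():
    tail = link.strip().removeprefix("https://github.com/").strip("/")
    parts = tail.split("/")
    if len(parts) >= 2:
        pairs.append((parts[0], parts[1]))  # (owner, repo)
```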
data_collection_utils/top-1000-repos.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:4fa7b4ed3bbeca1048ed6fbe9e2ccb212043c211785fff53564230b4c5cad876
3
- size 90891
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c2afb145f66ea26cb5d0392b4807f43bdd8c0a69efa56087ada14799802ecc1d
3
+ size 156344
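The pointer change above only records the new LFS object hash and size. To inspect the refreshed table locally, one would pull the LFS object first and then read it, e.g.:

```python
import pandas as pd

# Run `git lfs pull` first so the Parquet file is materialized locally.
df = pd.read_parquet("data_collection_utils/top-1000-repos.parquet")
print(len(df), list(df.columns))
```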