gss1147 committed
Commit 7e34914 · verified · 1 Parent(s): 6f8abf1

Upload 10 files

.gitattributes CHANGED
@@ -57,3 +57,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ HyperScholar-OmniPython-50K-CodeOnly.jsonl filter=lfs diff=lfs merge=lfs -text
+ HyperScholar-OmniPython-50K-HyperReason.jsonl filter=lfs diff=lfs merge=lfs -text
EVAL_README.md ADDED
@@ -0,0 +1,16 @@
+ # Evaluation (Standardized)
+
+ ## Goal
+ Make results comparable across community fine-tunes.
+
+ ## Recommended metrics
+ - HumanEval pass@1
+ - Optional: MBPP pass@1
+
+ ## Suggested tool
+ Use lm-evaluation-harness (or your preferred harness) to run HumanEval and report settings:
+ - base model
+ - training recipe (full / LoRA / QLoRA)
+ - sequence length
+ - epochs
+ - hardware
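For illustration only (not part of this commit): a minimal HumanEval run with lm-evaluation-harness might look like the sketch below. The `simple_evaluate` entry point and the `humaneval` task name are assumptions about the installed harness version, and `YOUR_FINETUNED_MODEL` is a placeholder.

```python
# Minimal sketch, assuming lm-evaluation-harness (`pip install lm-eval`) exposes
# `lm_eval.simple_evaluate` and a `humaneval` task in the installed version;
# recent releases may also require an explicit opt-in for executing generated code.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=YOUR_FINETUNED_MODEL,dtype=bfloat16",
    tasks=["humaneval"],
    batch_size=8,
)

# results["results"]["humaneval"] holds the pass@1 score to report alongside
# base model, training recipe, sequence length, epochs, and hardware.
print(results["results"]["humaneval"])
```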
HyperScholar-OmniPython-50K-CodeOnly.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8892df67df612a9a9ebc1deb61f13f7d4bc749a2f9d7a28dbbf6c994908f3d3
+ size 68591405
HyperScholar-OmniPython-50K-HyperReason.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:740b5fcf75af801f0d49f1cc7afddd30e5041592ffb45ccf8d0cb840eb8ac7c2
+ size 89469558
LEADERBOARD.md ADDED
@@ -0,0 +1,7 @@
+ # HyperScholar-OmniPython Community Leaderboard (WithIn Us AI)
+
+ Submit a PR adding your model results (use `MODEL_SUBMISSION_TEMPLATE.md`).
+
+ | Date | Model | Base | Config | Method | HumanEval pass@1 | Notes |
+ |---|---|---|---|---|---:|---|
+ | | | | | | | |
MODEL_SUBMISSION_TEMPLATE.md ADDED
@@ -0,0 +1,21 @@
+ # Model Submission Template — WithIn Us AI
+
+ ## Model
+ - Model name (HF):
+ - Base model:
+ - Dataset config: codeonly / hyperreason
+ - Training method: full finetune / LoRA / QLoRA
+ - Seq length:
+ - Epochs:
+ - LR / scheduler:
+ - Batch size / grad acc:
+ - Hardware (GPU + VRAM):
+
+ ## Results
+ - HumanEval pass@1:
+ - HumanEval pass@10 (optional):
+ - Other evals (MBPP, etc.):
+
+ ## Notes
+ - Any special tricks (packing, chat template, prompt format)?
+ - Known failure modes?
README.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ pretty_name: HyperScholar-OmniPython (50K Spec→Code→Tests) — WithIn Us AI
+ language:
+ - en
+ license: apache-2.0
+ task_categories:
+ - text-generation
+ - text2text-generation
+ tags:
+ - python
+ - code
+ - instruction-tuning
+ - sft
+ - unit-tests
+ - software-engineering
+ - security
+ - async
+ - algorithms
+ - trl
+ size_categories:
+ - 10K<n<100K
+ configs:
+ - config_name: codeonly
+   data_files: HyperScholar-OmniPython-50K-CodeOnly.jsonl
+ - config_name: hyperreason
+   data_files: HyperScholar-OmniPython-50K-HyperReason.jsonl
+ ---
+
+ # HyperScholar-OmniPython (50K) — WithIn Us AI
+
+ Publisher: **WithIn Us AI**
+ Hugging Face: **gss1147**
+
+ A developer-focused Python fine-tuning dataset that trains **engineering discipline** with a Mixture-of-Experts flavor:
+ clear requirements, edge cases, complexity awareness, secure defaults, and minimal tests.
+
+ ## What’s included
+
+ Two configs (each **50,000** records):
+
+ - **codeonly**: assistant outputs *final code only* (clean production behavior)
+ - **hyperreason**: assistant outputs:
+   - Expert (MoE label: ALG | DATA | ASYNC | SEC | API | TEST | PERF | STYLE)
+   - Spec
+   - Edge cases
+   - Complexity
+   - Implementation
+   - Minimal tests
+
+ ## Why developers want this
+
+ - Teaches habits that improve reliability in real codebases:
+   - fail-closed validation and security patterns
+   - minimal tests and typing discipline
+   - async/reliability primitives (timeouts, concurrency limits, backoff, circuit breaker)
+   - standard library competence and clean APIs
+
+ ## Files in this repo
+
+ - `HyperScholar-OmniPython-50K-CodeOnly.jsonl`
+ - `HyperScholar-OmniPython-50K-HyperReason.jsonl`
+ - `train_sft_omni_50k.py`
+ - `requirements_omni_50k.txt`
+ - `hf_upload_dataset.py`
+ - `EVAL_README.md`
+ - `LEADERBOARD.md`
+ - `MODEL_SUBMISSION_TEMPLATE.md`
+
+ ## Schema (JSONL)
+
+ Each line:
+ ```json
+ {
+   "id": "OMNI00001",
+   "tags": ["..."],
+   "prompt": [
+     {"role":"system","content":"..."},
+     {"role":"user","content":"..."}
+   ],
+   "completion": [
+     {"role":"assistant","content":"..."}
+   ]
+ }
+ ```
+
+ ## Train (TRL SFTTrainer)
+
+ ```bash
+ pip install -r requirements_omni_50k.txt
+
+ python train_sft_omni_50k.py \
+   --model YOUR_BASE_MODEL \
+   --dataset HyperScholar-OmniPython-50K-CodeOnly.jsonl \
+   --output_dir out_omni_50k \
+   --use_lora --use_4bit --bf16 \
+   --max_seq_len 4096 \
+   --gradient_accumulation_steps 16 \
+   --learning_rate 2e-4 \
+   --num_train_epochs 1 \
+   --packing
+ ```
+
+ To train the structured “engineering discipline” behavior:
+ - set `--dataset HyperScholar-OmniPython-50K-HyperReason.jsonl`
+
+ ## Contribute / publish your fine-tuned model
+
+ Train a model on this dataset and publish it on Hugging Face, then add your result in `LEADERBOARD.md`
+ using `MODEL_SUBMISSION_TEMPLATE.md`.
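For illustration only (not part of the committed README): a minimal sketch of loading the two configs with the `datasets` library and rendering one record for chat-style SFT. The repo id mirrors the default in `hf_upload_dataset.py`, and `YOUR_BASE_MODEL` is a placeholder.

```python
# Minimal sketch, assuming the dataset is published under the default repo id
# used by hf_upload_dataset.py; field names follow the JSONL schema above.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("gss1147/HyperScholar-OmniPython-50K", "codeonly", split="train")
# or: load_dataset("gss1147/HyperScholar-OmniPython-50K", "hyperreason", split="train")

record = ds[0]
messages = record["prompt"] + record["completion"]  # system + user + assistant turns

# Render one training example with the base model's chat template.
tok = AutoTokenizer.from_pretrained("YOUR_BASE_MODEL")
text = tok.apply_chat_template(messages, tokenize=False)

print(record["id"], record["tags"])
print(text[:500])
```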
RESULTS_SCHEMA.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "date": "YYYY-MM-DD",
+   "model_id": "username/model",
+   "base_model": "base",
+   "config": "codeonly|hyperreason",
+   "method": "full|lora|qlora",
+   "humaneval_pass1": 0.0,
+   "mbpp_pass1": null,
+   "hardware": "GPU",
+   "notes": ""
+ }
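For illustration only (not part of this commit): a small, hypothetical sanity check for a results file that follows the schema above, run before opening a leaderboard PR.

```python
# Hypothetical validation helper; key names and allowed values are taken
# from RESULTS_SCHEMA.json above. The file path is illustrative.
import json
from pathlib import Path

REQUIRED_KEYS = {
    "date", "model_id", "base_model", "config", "method",
    "humaneval_pass1", "mbpp_pass1", "hardware", "notes",
}

def check_submission(path: str) -> None:
    entry = json.loads(Path(path).read_text())
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        raise SystemExit(f"Missing keys: {sorted(missing)}")
    if entry["config"] not in {"codeonly", "hyperreason"}:
        raise SystemExit("config must be 'codeonly' or 'hyperreason'")
    if entry["method"] not in {"full", "lora", "qlora"}:
        raise SystemExit("method must be 'full', 'lora', or 'qlora'")
    print("OK:", entry["model_id"], "HumanEval pass@1 =", entry["humaneval_pass1"])

if __name__ == "__main__":
    check_submission("my_results.json")
```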
hf_upload_dataset.py ADDED
@@ -0,0 +1,39 @@
+ #!/usr/bin/env python3
+ # WithIn Us AI / gss1147 upload helper for Hugging Face datasets.
+ #
+ # Usage:
+ # export HF_TOKEN=...
+ # python hf_upload_dataset.py --folder . --repo_id gss1147/HyperScholar-OmniPython-50K
+ #
+ from __future__ import annotations
+ import argparse, os
+ from pathlib import Path
+ from huggingface_hub import create_repo, upload_folder
+
+ def main() -> None:
+     p = argparse.ArgumentParser()
+     p.add_argument("--repo_id", default="gss1147/HyperScholar-OmniPython-50K")
+     p.add_argument("--folder", default=".")
+     p.add_argument("--private", action="store_true")
+     args = p.parse_args()
+
+     token = os.getenv("HF_TOKEN")
+     if not token:
+         raise SystemExit("Set HF_TOKEN env var with a write-enabled token.")
+
+     folder = Path(args.folder).resolve()
+     if not folder.exists():
+         raise SystemExit(f"Folder not found: {folder}")
+
+     create_repo(args.repo_id, repo_type="dataset", exist_ok=True, private=args.private, token=token)
+     upload_folder(
+         repo_id=args.repo_id,
+         repo_type="dataset",
+         folder_path=str(folder),
+         commit_message="Upload HyperScholar-OmniPython-50K (WithIn Us AI)",
+         token=token,
+     )
+     print("Uploaded:", args.repo_id)
+
+ if __name__ == "__main__":
+     main()
requirements_omni_50k.txt ADDED
@@ -0,0 +1,6 @@
+ transformers>=4.42.0
+ datasets>=2.19.0
+ accelerate>=0.33.0
+ trl>=0.26.2
+ peft>=0.12.0
+ bitsandbytes>=0.43.0
train_sft_omni_50k.py ADDED
@@ -0,0 +1,167 @@
+ #!/usr/bin/env python3
+ """
+ HyperScholar-OmniPython SFT training script (TRL SFTTrainer)
+
+ Dataset format: JSONL where each record is:
+ {
+   "id": "...",
+   "tags": [...],
+   "prompt": [{"role":"system","content":"..."},{"role":"user","content":"..."}],
+   "completion": [{"role":"assistant","content":"..."}]
+ }
+
+ Example:
+ python train_sft_omni_50k.py --model <base> --dataset HyperScholar-OmniPython-50K-CodeOnly.jsonl --output_dir out --use_lora --use_4bit --bf16
+ """
+
+ from __future__ import annotations
+
+ import argparse
+ import os
+ from dataclasses import dataclass
+
+ import torch
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
+ from trl import SFTTrainer
+
+
+ @dataclass(frozen=True)
+ class Args:
+     model: str
+     dataset: str
+     output_dir: str
+     max_seq_len: int
+     per_device_train_batch_size: int
+     gradient_accumulation_steps: int
+     learning_rate: float
+     num_train_epochs: float
+     logging_steps: int
+     save_steps: int
+     warmup_ratio: float
+     lr_scheduler_type: str
+     bf16: bool
+     fp16: bool
+     packing: bool
+     attn_implementation: str | None
+     use_lora: bool
+     lora_r: int
+     lora_alpha: int
+     lora_dropout: float
+     use_4bit: bool
+     gradient_checkpointing: bool
+     seed: int
+
+
+ def parse_args() -> Args:
+     p = argparse.ArgumentParser()
+     p.add_argument("--model", required=True)
+     p.add_argument("--dataset", required=True, help="Local JSONL path or HF dataset repo id.")
+     p.add_argument("--output_dir", default="./out_omni_50k")
+     p.add_argument("--max_seq_len", type=int, default=4096)
+     p.add_argument("--per_device_train_batch_size", type=int, default=1)
+     p.add_argument("--gradient_accumulation_steps", type=int, default=16)
+     p.add_argument("--learning_rate", type=float, default=2e-4)
+     p.add_argument("--num_train_epochs", type=float, default=1.0)
+     p.add_argument("--logging_steps", type=int, default=10)
+     p.add_argument("--save_steps", type=int, default=1000)
+     p.add_argument("--warmup_ratio", type=float, default=0.03)
+     p.add_argument("--lr_scheduler_type", default="cosine")
+     p.add_argument("--bf16", action="store_true")
+     p.add_argument("--fp16", action="store_true")
+     p.add_argument("--packing", action="store_true")
+     p.add_argument("--attn_implementation", default=None)
+     p.add_argument("--use_lora", action="store_true")
+     p.add_argument("--lora_r", type=int, default=16)
+     p.add_argument("--lora_alpha", type=int, default=32)
+     p.add_argument("--lora_dropout", type=float, default=0.05)
+     p.add_argument("--use_4bit", action="store_true")
+     p.add_argument("--gradient_checkpointing", action="store_true")
+     p.add_argument("--seed", type=int, default=42)
+     ns = p.parse_args()
+
+     if ns.bf16 and ns.fp16:
+         raise SystemExit("Choose only one: --bf16 or --fp16")
+     return Args(**vars(ns))
+
+
+ def main() -> None:
+     a = parse_args()
+
+     if os.path.exists(a.dataset):
+         ds = load_dataset("json", data_files=a.dataset, split="train")
+     else:
+         ds = load_dataset(a.dataset, split="train")
+
+     tok = AutoTokenizer.from_pretrained(a.model, use_fast=True)
+     if tok.pad_token is None:
+         tok.pad_token = tok.eos_token
+
+     quantization_config = None
+     if a.use_4bit:
+         from transformers import BitsAndBytesConfig
+         quantization_config = BitsAndBytesConfig(
+             load_in_4bit=True,
+             bnb_4bit_use_double_quant=True,
+             bnb_4bit_quant_type="nf4",
+             bnb_4bit_compute_dtype=torch.bfloat16 if a.bf16 else torch.float16,
+         )
+
+     model = AutoModelForCausalLM.from_pretrained(
+         a.model,
+         device_map="auto",
+         torch_dtype=torch.bfloat16 if a.bf16 else (torch.float16 if a.fp16 else None),
+         attn_implementation=a.attn_implementation,
+         quantization_config=quantization_config,
+     )
+
+     if a.gradient_checkpointing:
+         model.gradient_checkpointing_enable()
+         model.config.use_cache = False
+
+     peft_config = None
+     if a.use_lora:
+         from peft import LoraConfig, TaskType
+         peft_config = LoraConfig(
+             r=a.lora_r,
+             lora_alpha=a.lora_alpha,
+             lora_dropout=a.lora_dropout,
+             bias="none",
+             task_type=TaskType.CAUSAL_LM,
+             target_modules="all-linear",
+         )
+
+     targs = TrainingArguments(
+         output_dir=a.output_dir,
+         per_device_train_batch_size=a.per_device_train_batch_size,
+         gradient_accumulation_steps=a.gradient_accumulation_steps,
+         learning_rate=a.learning_rate,
+         num_train_epochs=a.num_train_epochs,
+         logging_steps=a.logging_steps,
+         save_steps=a.save_steps,
+         warmup_ratio=a.warmup_ratio,
+         lr_scheduler_type=a.lr_scheduler_type,
+         bf16=a.bf16,
+         fp16=a.fp16,
+         optim="paged_adamw_32bit" if a.use_4bit else "adamw_torch",
+         report_to="none",
+         seed=a.seed,
+     )
+
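+     # NOTE: the call below uses the older SFTTrainer keyword arguments
+     # (tokenizer=, max_seq_length=, packing=). Recent TRL releases, such as
+     # the trl>=0.26 pin in requirements_omni_50k.txt, move packing and the
+     # sequence-length setting into SFTConfig and accept processing_class=
+     # instead of tokenizer=, so these kwargs may need adjusting to match
+     # the installed TRL version.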
+     trainer = SFTTrainer(
+         model=model,
+         tokenizer=tok,
+         train_dataset=ds,
+         args=targs,
+         max_seq_length=a.max_seq_len,
+         packing=a.packing,
+         peft_config=peft_config,
+     )
+
+     trainer.train()
+     trainer.save_model(a.output_dir)
+     tok.save_pretrained(a.output_dir)
+
+
+ if __name__ == "__main__":
+     main()