gss1147 committed
Commit 0cbdce9 · verified · 1 Parent(s): 48d605b

Upload 4 files
README.md CHANGED
@@ -1,116 +1,18 @@
  ---
- pretty_name: HyperScholar-OmniPython (50K Spec→Code→Tests) — WithIn Us AI
- language:
- - en
- license: apache-2.0
- task_categories:
- - text-generation
- - text2text-generation
- tags:
- - python
- - code
- - instruction-tuning
- - sft
- - unit-tests
- - software-engineering
- - security
- - async
- - algorithms
- - trl
- size_categories:
- - 10K<n<100K
- configs:
- - config_name: codeonly
-   data_files: HyperScholar-OmniPython-50K-CodeOnly.jsonl
- - config_name: hyperreason
-   data_files: HyperScholar-OmniPython-50K-HyperReason.jsonl
- - config_name: autotrain_chat
-   data_files: HyperScholar-OmniPython-50K-AutoTrain-Chat.jsonl
  ---

- # HyperScholar-OmniPython (50K) — WithIn Us AI

- Publisher: **WithIn Us AI**
- Hugging Face: **gss1147**

- A developer-focused Python fine-tuning dataset that trains **engineering discipline** with a Mixture-of-Experts flavor:
- clear requirements, edge cases, complexity awareness, secure defaults, and minimal tests.
-
- ## What’s included
-
- Two configs (each **50,000** records):
-
- - **codeonly**: assistant outputs *final code only* (clean production behavior)
- - **hyperreason**: assistant outputs:
-   - Expert (MoE label: ALG | DATA | ASYNC | SEC | API | TEST | PERF | STYLE)
-   - Spec
-   - Edge cases
-   - Complexity
-   - Implementation
-   - Minimal tests
-
- ## Why developers want this
-
- - Teaches habits that improve reliability in real codebases:
-   - fail-closed validation and security patterns
-   - minimal tests and typing discipline
-   - async/reliability primitives (timeouts, concurrency limits, backoff, circuit breaker)
-   - standard library competence and clean APIs
-
- ## Files in this repo
-
- - `HyperScholar-OmniPython-50K-CodeOnly.jsonl`
- - `HyperScholar-OmniPython-50K-HyperReason.jsonl`
- - `train_sft_omni_50k.py`
- - `requirements_omni_50k.txt`
- - `hf_upload_dataset.py`
- - `EVAL_README.md`
- - `LEADERBOARD.md`
- - `MODEL_SUBMISSION_TEMPLATE.md`
-
- ## Schema (JSONL)
-
- Each line:
- ```json
- {
-   "id": "OMNI00001",
-   "tags": ["..."],
-   "prompt": [
-     {"role":"system","content":"..."},
-     {"role":"user","content":"..."}
-   ],
-   "completion": [
-     {"role":"assistant","content":"..."}
-   ]
- }
- ```
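As a quick sanity check against the schema in the removed README, here is a minimal sketch (assuming the CodeOnly file has been downloaded next to the script) that parses one record and verifies the message layout:

```python
import json

# Minimal sketch: parse the first CodeOnly record and verify the
# prompt/completion message structure described by the schema above.
with open("HyperScholar-OmniPython-50K-CodeOnly.jsonl", encoding="utf-8") as f:
    record = json.loads(next(f))

assert {"id", "tags", "prompt", "completion"} <= record.keys()
for message in record["prompt"] + record["completion"]:
    assert message["role"] in {"system", "user", "assistant"}

print(record["id"], record["tags"])
```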
-
- ## Train (TRL SFTTrainer)
-
- ```bash
- pip install -r requirements_omni_50k.txt
-
- python train_sft_omni_50k.py \
-   --model YOUR_BASE_MODEL \
-   --dataset HyperScholar-OmniPython-50K-CodeOnly.jsonl \
-   --output_dir out_omni_50k \
-   --use_lora --use_4bit --bf16 \
-   --max_seq_len 4096 \
-   --gradient_accumulation_steps 16 \
-   --learning_rate 2e-4 \
-   --num_train_epochs 1 \
-   --packing
- ```
-
- To train the structured “engineering discipline” behavior:
- - set `--dataset HyperScholar-OmniPython-50K-HyperReason.jsonl`
-
- ## Contribute / publish your fine-tuned model
-
- Train a model on this dataset and publish it on Hugging Face, then add your result in `LEADERBOARD.md`
- using `MODEL_SUBMISSION_TEMPLATE.md`.
-
- ## AutoTrain (no-code) ready
-
- Use config `autotrain_chat` in AutoTrain Advanced (LLM → SFT). Column mapping: `{ "text": "text" }` and set `chat_template=tokenizer`.
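Since the configs were declared in the removed front matter, a small sketch of loading the AutoTrain config straight from the Hub (the repo id `gss1147/HyperScholar-OmniPython-50K` is taken from the new README below; the single `text` column is what the column mapping above implies):

```python
from datasets import load_dataset

# Sketch: load the AutoTrain-ready chat config from the Hub.
# Repo id assumed from this repo's README; the mapping above implies
# a single "text" column holding the rendered chat.
ds = load_dataset("gss1147/HyperScholar-OmniPython-50K", "autotrain_chat", split="train")
print(ds.column_names)      # expected: ["text"]
print(ds[0]["text"][:200])  # peek at the first rendered example
```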
 
  ---
+ title: Gemma Code Python — WithIn Us AI
+ emoji: 🧠
+ colorFrom: indigo
+ colorTo: purple
+ sdk: gradio
+ sdk_version: "4.44.1"
+ app_file: app.py
+ pinned: false
  ---

+ # Gemma Code Python — WithIn Us AI

+ This Space fixes the Hugging Face Spaces configuration error by specifying `sdk: gradio`.

+ ## Next
+ Connect your fine-tuned CodeGemma model trained on:
+ - `gss1147/HyperScholar-OmniPython-50K` (HyperReason recommended)
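A possible follow-up, sketching how `respond()` in `app.py` below could load the fine-tuned model once it exists (the model id is hypothetical until published, and `transformers` plus `torch` would need to be added to `requirements.txt`):

```python
from transformers import pipeline

# Hypothetical repo id: swap in the real one once the fine-tuned
# CodeGemma model is published under gss1147/WithInUsAI-...
MODEL_ID = "gss1147/WithInUsAI-CodeGemma-OmniPython"

generator = pipeline("text-generation", model=MODEL_ID)

def respond(prompt: str) -> str:
    prompt = (prompt or "").strip()
    if not prompt:
        return "Enter a prompt."
    out = generator(prompt, max_new_tokens=512, do_sample=False)
    # text-generation pipelines echo the prompt; return only the completion
    return out[0]["generated_text"][len(prompt):].lstrip()
```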
WithInUsAI_Gemma_Code_Python_Space_Fix_1767207633.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6a66091f74f5d814f0ca49a967032770208c305367e0ea76dd80bcef3ade87c
+ size 1132
app.py ADDED
@@ -0,0 +1,31 @@
+ import gradio as gr
+
+ TITLE = "WithIn Us AI — Gemma Code Python"
+ DESC = (
+     "Space is running. Next: connect your fine-tuned CodeGemma model trained on "
+     "gss1147/HyperScholar-OmniPython-50K (HyperReason)."
+ )
+
+ def respond(prompt: str) -> str:
+     prompt = (prompt or "").strip()
+     if not prompt:
+         return "Enter a prompt to test the Space. (Model hookup comes next.)"
+     return (
+         "Space OK (Gradio running).\n\n"
+         "Next steps:\n"
+         "1) Train CodeGemma with AutoTrain (SFT)\n"
+         "2) Publish model under gss1147/WithInUsAI-...\n"
+         "3) Update this Space to load the model\n\n"
+         f"Your prompt:\n{prompt}"
+     )
+
+ demo = gr.Interface(
+     fn=respond,
+     inputs=gr.Textbox(lines=10, label="Prompt"),
+     outputs=gr.Textbox(lines=14, label="Response"),
+     title=TITLE,
+     description=DESC,
+ )
+
+ if __name__ == "__main__":
+     demo.launch()
requirements.txt ADDED
@@ -0,0 +1 @@
+ gradio==4.44.1