DeryFerd committed on
Commit 8b67688 · verified · 1 Parent(s): bdfa374

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -28,7 +28,7 @@ pipeline_tag: text-generation
 
 **UPDATE:** This model is a fine-tuned, versatile version of **`microsoft/phi-2`**, adapted for both **Python code generation** and **step-by-step mathematical reasoning**. The goal of this project was to distill the capabilities of larger "teacher" models (`Qwen2.5-Coder-7B-Instruct` for coding and `Qwen2.5-Math-7B-Instruct` for math) into the compact and efficient Phi-2 architecture.
 
-The model was trained on a combined dataset of Python programming problems (from MBPP) and grade-school math word problems (from GSM8K and MATH). It is designed to generate not just answers, but also the thought process behind them, mimicking the style of its teachers.
+The model was trained on a combined dataset of Python programming problems (from MBPP and opc-sft-stage2) and grade-school math word problems (from GSM8K and MATH). It is designed to generate not just answers, but also the thought process behind them, mimicking the style of its teachers.
 
 - **Developed by:** DeryFerd
 - **Model type:** Causal Language Model
@@ -56,7 +56,7 @@ This is a specialized model. It will not perform well on tasks outside of basic
 
 ## Bias, Risks, and Limitations
 
-This model was trained on the MBPP, GSM8K, and MATH datasets. Its capabilities are limited to these domains. The model may generate code that is syntactically correct but logically flawed, or math solutions that seem logical but contain calculation errors. **Always review and test the generated output before use in production environments.**
+This model was trained on the MBPP, opc-sft-stage2, GSM8K, and MATH datasets. Its capabilities are limited to these domains. The model may generate code that is syntactically correct but logically flawed, or math solutions that seem logical but contain calculation errors. **Always review and test the generated output before use in production environments.**
 
 A notable limitation discovered during development is a potential **low-level GPU memory conflict**. When this model is loaded into the same runtime as a significantly larger and architecturally different model (like Qwen 7B), its fine-tuned capabilities can be silently overridden, causing it to revert to the base model's behavior. It is recommended to run this model in an isolated process.
 
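
As a minimal sketch of the isolated-process recommendation in the README text above: the snippet below loads the fine-tuned model on its own with the Hugging Face `transformers` API, rather than alongside a larger model such as Qwen 7B. The repo id `DeryFerd/phi-2-code-math` is a placeholder (the actual model id is not stated in this diff), and `device_map="auto"` assumes `accelerate` is installed.

```python
# Hypothetical usage sketch: load the fine-tuned Phi-2 in its own process,
# not in the same runtime as a larger, architecturally different model.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DeryFerd/phi-2-code-math"  # placeholder repo id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```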