Update README.md
### SmolLM2-Math-IIO-1.7B-Instruct

The **SmolLM2-Math-IIO-1.7B-Instruct** model is a fine-tuned variant of the **SmolLM2-1.7B** architecture, optimized for mathematical instruction and reasoning tasks. It is particularly suited for applications that require mathematical problem-solving, logical inference, and detailed step-by-step explanations.

| File Name                | Size    | Description                                    | Upload Status |
|--------------------------|---------|------------------------------------------------|---------------|
| `.gitattributes`         | 1.52 kB | Git attributes configuration file              | Uploaded      |
| …                        | …       | …                                              | …             |
| `tokenizer_config.json`  | 3.95 kB | Tokenizer configuration for loading and usage  | Uploaded      |
| `vocab.json`             | 801 kB  | Vocabulary for the tokenizer                   | Uploaded      |

### **Key Features:**

1. **Math-Focused Capabilities:**
   This model is fine-tuned to handle a wide range of mathematical queries, from simple arithmetic to complex equations and mathematical proofs.

2. **Instruction-Tuned:**
   Specifically trained to follow structured queries and deliver clear, coherent outputs based on instructions, ensuring high-quality, relevant responses to prompts.

3. **Tokenizer & Custom Tokens:**
   Includes a robust tokenizer configuration with support for mathematical notation, custom tokens, and an extended vocabulary for accurate understanding and output generation.
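
For a quick sanity check of the tokenizer described above, the sketch below loads it with Hugging Face Transformers and prints its vocabulary size and declared special tokens. The Hub id is an assumption based on the model name; point it at the actual repository, or at a local copy of the files listed in this card, if it differs.

```python
# Minimal sketch: inspect the tokenizer shipped with this repo.
from transformers import AutoTokenizer

MODEL_ID = "prithivMLmods/SmolLM2-Math-IIO-1.7B-Instruct"  # assumption: adjust to the real repo path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Vocabulary size and the special tokens declared in special_tokens_map.json.
print("Vocab size:", tokenizer.vocab_size)
print("Special tokens:", tokenizer.special_tokens_map)

# See how a small piece of mathematical notation is split into tokens.
print(tokenizer.tokenize("Solve for x: 3x^2 - 12 = 0"))
```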
---

### **Training Details:**

- **Base Model:** [SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
- **Dataset:** Trained on **Math-IIO-68K-Mini**, a dataset focused on mathematical instructions and logic-based queries, with a total of 68.8k examples.
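
To inspect the fine-tuning data, a minimal sketch with the `datasets` library is shown below. The dataset id is an assumption based on the name above; replace it with the actual Hub path if it differs.

```python
# Exploratory sketch: peek at the Math-IIO-68K-Mini examples.
from datasets import load_dataset

dataset = load_dataset("prithivMLmods/Math-IIO-68K-Mini", split="train")  # assumed dataset id

print(dataset)     # column names and example count (~68.8k)
print(dataset[0])  # first instruction/response pair
```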
---
### **File Details:**

| **File Name**              | **Size**  | **Description**                                 |
|----------------------------|-----------|-------------------------------------------------|
| `.gitattributes`           | 1.52 kB   | Git attributes configuration file.              |
| `README.md`                | 287 Bytes | Updated README file with model details.         |
| `config.json`              | 940 Bytes | Configuration file for model setup.             |
| `generation_config.json`   | 162 Bytes | Configuration for generation-specific settings. |
| `merges.txt`               | 515 kB    | Tokenizer merging rules (Byte Pair Encoding).   |
| `pytorch_model.bin`        | 3.42 GB   | Full model weights in PyTorch format.           |
| `special_tokens_map.json`  | 572 Bytes | Special token mappings for the tokenizer.       |
| `tokenizer.json`           | 3.77 MB   | Tokenizer configuration and vocabulary.         |
| `tokenizer_config.json`    | 3.95 kB   | Tokenizer configuration for loading.            |
| `vocab.json`               | 801 kB    | Vocabulary file for the tokenizer.              |

---
### **Capabilities:**

- **Mathematical Problem-Solving:** Solves and explains complex mathematical problems, including algebra, calculus, and more advanced topics.
- **Instruction-Following:** Adheres to structured inputs and outputs, making it effective for generating step-by-step solutions.
- **Text Generation:** Capable of generating mathematical proofs, explanations, and educational content tailored to various user queries.

---
### **Usage Instructions:**

1. **Model Setup:** Download all model files and ensure the PyTorch model weights and tokenizer configurations are included.
2. **Inference:** Load the model in a Python environment using frameworks like PyTorch or Hugging Face's Transformers (see the example after this list).
3. **Customization:** Configure the model with the `config.json` and `generation_config.json` files for optimal performance during inference.
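
A minimal inference sketch follows, assuming the checkpoint is published on the Hugging Face Hub under an id such as `prithivMLmods/SmolLM2-Math-IIO-1.7B-Instruct` (a placeholder; use the actual repo path or a local directory containing the files listed above) and that the tokenizer ships a SmolLM2-style chat template.

```python
# Minimal inference sketch with Hugging Face Transformers (PyTorch backend).
# MODEL_ID is an assumption -- replace it with the actual repo path or a local directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "prithivMLmods/SmolLM2-Math-IIO-1.7B-Instruct"  # assumed Hub id
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Build a chat-style prompt; apply_chat_template uses the template stored in
# tokenizer_config.json (assumed to follow the SmolLM2 instruct format).
messages = [{"role": "user", "content": "Solve step by step: 2x + 6 = 20"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# generation_config.json supplies default generation settings; the arguments
# below only override them for this call.
output_ids = model.generate(input_ids, max_new_tokens=256, temperature=0.2, do_sample=True)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```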
---