---
license: apache-2.0
base_model: EleutherAI/pythia-1.4b
tags:
- generated_from_trainer
- sft
- tldr
datasets:
- trl-lib/tldr
language:
- en
library_name: transformers
---
# pythia-1.4b Fine-tuned on tldr
This model is a fine-tuned version of [EleutherAI/pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) on the [trl-lib/tldr](https://huggingface.co/datasets/trl-lib/tldr) dataset. It was trained with supervised fine-tuning (SFT) to produce TL;DR-style summaries of Reddit posts.
## Training Results

### Training Statistics
| Metric | Value |
|--------|-------|
| Total Steps | 1356 |
| Final Training Loss | 147.1650 |
| Min Training Loss | 2.8189 |
| Training Runtime | 347.80 seconds |
| Samples/Second | 249.34 |
## Training Configuration
| Parameter | Value |
|-----------|-------|
| Base Model | EleutherAI/pythia-1.4b |
| Dataset | trl-lib/tldr |
| Number of Epochs | 1.0 |
| Per Device Batch Size | 16 |
| Gradient Accumulation Steps | 1 |
| Total Batch Size | 64 (16 per device × 4 GPUs × 1 accumulation step) |
| Learning Rate | 2e-05 |
| LR Scheduler | cosine |
| Warmup Ratio | 0.1 |
| Max Sequence Length | 512 |
| Optimizer | adamw_torch_fused |
| Mixed Precision | BF16 |
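A run with this configuration can be approximated with TRL's `SFTTrainer`. The sketch below mirrors the hyperparameters in the table above; it is an assumption rather than the exact script used here, argument names differ slightly between TRL releases (e.g. `max_seq_length` vs. `max_length` in `SFTConfig`), and the 4-GPU launch (e.g. via `accelerate launch` or `torchrun`) is not recorded in this card.
```python
# Approximate reproduction of the configuration above (sketch, not the original script).
# Launch on 4 GPUs, e.g.: accelerate launch --num_processes 4 train_sft.py
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/tldr", split="train")

config = SFTConfig(
    output_dir="pythia-1.4b_tldr",
    num_train_epochs=1.0,
    per_device_train_batch_size=16,   # 16 per device x 4 GPUs x 1 accumulation step = 64
    gradient_accumulation_steps=1,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_seq_length=512,               # renamed to `max_length` in newer TRL releases
    optim="adamw_torch_fused",
    bf16=True,
)

trainer = SFTTrainer(
    model="EleutherAI/pythia-1.4b",   # SFTTrainer also accepts an already-loaded model
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```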
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "activeDap/pythia-1.4b_tldr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Format input with prompt template
prompt = "What is machine learning?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate response
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
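Because the model was fine-tuned on Reddit TL;DR summarization rather than general question answering, a summarization-style prompt ending in `TL;DR:` should match the training distribution better than the question above. Continuing from the snippet above, the template below is an assumption based on the `trl-lib/tldr` dataset format, not a template recorded in this card:
```python
# Hypothetical TL;DR-style prompt; check the trl-lib/tldr dataset card for the exact format.
post = (
    "I've been training for my first marathon for six months, but last week I "
    "sprained my ankle and the race is only three weeks away..."
)
prompt = f"{post}\n\nTL;DR:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=False)
# Decode only the newly generated tokens (the summary).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```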
## Training Framework
- **Library:** Transformers + TRL
- **Training Type:** Supervised Fine-Tuning (SFT)
- **Format:** Prompt-completion with Assistant-only loss (see the masking sketch below)
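"Assistant-only" (completion-only) loss means the cross-entropy is computed only on the completion tokens, while the prompt tokens are masked out of the labels. A minimal, framework-agnostic sketch of that masking, not TRL's exact internals, looks like this:
```python
# Illustrative completion-only loss masking with plain transformers/PyTorch.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")

prompt = "Some Reddit post about marathon training...\n\nTL;DR:"
completion = " Sprained my ankle three weeks before my first marathon."

prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
completion_ids = tokenizer(completion, add_special_tokens=False)["input_ids"]

input_ids = torch.tensor([prompt_ids + completion_ids])
labels = input_ids.clone()
labels[:, : len(prompt_ids)] = -100  # -100 makes the loss ignore the prompt tokens

# model(input_ids=input_ids, labels=labels).loss is then computed over the completion only.
```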
## Citation
If you use this model, please cite the base model and the TL;DR summarization dataset it was trained on:
```bibtex
@misc{biderman2023pythia,
  title={Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling},
  author={Stella Biderman and Hailey Schoelkopf and Quentin Anthony and others},
  year={2023},
  eprint={2304.01373},
  archivePrefix={arXiv}
}

@misc{stiennon2020summarize,
  title={Learning to summarize from human feedback},
  author={Nisan Stiennon and Long Ouyang and Jeff Wu and others},
  year={2020},
  eprint={2009.01325},
  archivePrefix={arXiv}
}
```