---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: peft
pipeline_tag: text-generation
license: llama3.2
tags:
  - LoRA
  - QLoRA
  - instruction-tuning
  - text-classification
  - peft
  - transformers
  - trl
  - bitsandbytes
  - base_model:adapter:meta-llama/Llama-3.2-1B
base_model: meta-llama/Llama-3.2-1B
datasets:
  - real-jiakai/arxiver-with-category
language:
  - en
widget:
  - text: "Classify the text into ['cs.CL','cs.CV','cs.LG','hep-ph','quant-ph'] and return the answer as the exact text label.\ntext: Quantum entanglement in photonics\nlabel:"
  - text: "Classify the text into ['cs.CL','cs.CV','cs.LG','hep-ph','quant-ph'] and return the answer as the exact text label.\ntext: Vision transformer achieves state-of-the-art on ImageNet\nlabel:"
---
# Model Card for LLM Instruction‑Tuning for Text Classification (LoRA + QLoRA)
<!-- Provide a quick summary of what the model is/does. -->
This repository provides code and configuration to fine‑tune a decoder‑only LLM (default: `meta-llama/Llama-3.2-1B`) for **instruction‑style text classification** using **LoRA/QLoRA**. Rather than training a task‑specific classifier head, the project formulates classification as a short instruction → answer generation task and evaluates by **exact string match** against the label. It includes simple training/inference scripts, a 5‑label arXiv‑style demo, and optional Amazon SageMaker entrypoints.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This project instruction‑tunes a base, decoder‑only LLM with **LoRA adapters** loaded in **4‑bit NF4** precision for memory‑efficient training and inference. Supervised fine‑tuning is performed with TRL’s `SFTTrainer`. Prompts ask the model to “return the answer as the exact text label,” so predictions are decoded as plain text and compared by string match.
- **Developed by:** Amirhossein Yousefi (GitHub: `amirhossein-yousefi`)
- **Model type:** Decoder‑only LLM fine‑tuned with LoRA for single‑label text classification via instruction‑following
- **Language(s) (NLP):** English by default (demo dataset uses arXiv titles/abstracts); broader multilingual coverage depends on the chosen base model
- **License:** The repository itself does not include an explicit OSS license; the **base model** `meta-llama/Llama-3.2-1B` is governed by the **Llama 3.2 Community License**. You must accept and comply with Meta’s license to access and use the weights.
- **Finetuned from model:** `meta-llama/Llama-3.2-1B` (configurable)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/amirhossein-yousefi/LLM-Instruction-Tuning-Text-Classification
- **Demo:** The repo includes an arXiv‑style 5‑label demo and example results; no hosted demo is provided.
## Uses
### Direct Use
- Fine‑tune LoRA adapters on your own CSV dataset for **single‑label text classification** (e.g., topic/category detection) using the provided `scripts/train.py`.
- Run inference/evaluation with `scripts/predict.py` to generate deterministic label strings and compute **accuracy**, **micro/macro F1**, a **classification report**, and a **confusion matrix**.
- Optional **Amazon SageMaker** utilities let you run managed training and deploy a real‑time endpoint with the LoRA adapters attached at load time.
### Downstream Use
- Integrate the trained LoRA adapters into applications where explainable, instruction‑driven classification is helpful (e.g., routing, tagging, moderation).
- Swap the base model (any compatible decoder‑only LLM on the Hugging Face Hub) and re‑train with the same prompt template.
- Extend label sets without architectural changes—only prompt/label lists need to be updated.
### Out-of-Scope Use
- **CPU‑only** training/inference with this repo as‑is (4‑bit `bitsandbytes` path expects NVIDIA CUDA GPUs).
- **Multi‑label** classification (comma‑separated outputs) is not implemented out of the box (listed as a roadmap idea).
- **Open‑domain generation** or safety‑critical decision‑making; this project focuses on label selection with short inputs.
## Bias, Risks, and Limitations
- Outputs mirror biases in the **training corpus** you provide and in the **base model**. If your labels or examples are imbalanced or ambiguous, the model may propagate that bias.
- Exact‑match decoding can be brittle to **tokenization/typo** effects—ensure labels are short, canonical strings and restrict the decoding space.
- The base Llama 3.2 model has its own safety limitations and license‑based usage constraints (e.g., attribution and acceptable‑use provisions).
- The demo dataset is limited to **5 arXiv‑style labels** and relatively short academic texts; generalizing beyond this domain requires additional data.
### Recommendations
- Curate balanced datasets; consider **stratified splits** and per‑class metrics.
- Keep **temperature = 0.0** for deterministic label decoding; constrain generation length (e.g., `max_new_tokens=8`).
- Validate robustness with **label synonyms/aliases** and adversarial cases; consider post‑processing that maps variants to canonical labels (see the sketch after this list).
- Review and comply with the **Llama 3.2 Community License** (and any other upstream licenses) when distributing adapters/derivatives.
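As a concrete example of such post‑processing, here is a minimal sketch; the `canonicalize` helper and its alias table are hypothetical additions, not part of the repository:

```python
# Hypothetical post-processing sketch: map the model's generated text to a
# canonical label. The alias table and helper below are illustrative only.
LABELS = ["cs.CL", "cs.CV", "cs.LG", "hep-ph", "quant-ph"]
ALIASES = {"cs cl": "cs.CL", "quantph": "quant-ph"}  # extend with observed variants

def canonicalize(generated: str) -> str | None:
    """Strip whitespace/case/punctuation noise before exact-match comparison."""
    stripped = generated.strip()
    first_line = stripped.splitlines()[0].strip() if stripped else ""
    if first_line in LABELS:                 # exact match: nothing to do
        return first_line
    lowered = first_line.lower().rstrip(".")
    for label in LABELS:                     # case-insensitive fallback
        if lowered == label.lower():
            return label
    return ALIASES.get(lowered)              # known variants, else None (a miss)
```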
## How to Get Started with the Model
**Install & train**
```bash
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install -r requirements.txt
# If the base model is gated, export an HF token
export HF_TOKEN=YOUR_HF_ACCESS_TOKEN
# One‑command training on CSVs
python scripts/train.py --base_path dataset --train_file train.csv --val_file validation.csv --test_file test.csv --label_column label_name --text_fields title abstract --base_model_name meta-llama/Llama-3.2-1B --output_dir llama-3.2-1b-arxiver-lora
```
**Inference & evaluation**
```bash
python scripts/predict.py --base_path dataset --test_file test.csv --base_model_name meta-llama/Llama-3.2-1B --output_dir llama-3.2-1b-arxiver-lora --save_csv predictions.csv
```
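For ad‑hoc use outside `scripts/predict.py`, a minimal Python sketch along these lines should work, assuming the adapter was saved to the `llama-3.2-1b-arxiver-lora` directory from the training command above:

```python
# Minimal standalone inference sketch (scripts/predict.py is the supported path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "meta-llama/Llama-3.2-1B"
ADAPTER = "llama-3.2-1b-arxiver-lora"  # assumed local path of the trained adapter

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the LoRA adapter

prompt = (
    "Classify the text into ['cs.CL','cs.CV','cs.LG','hep-ph','quant-ph'] "
    "and return the answer as the exact text label.\n"
    "text: Quantum entanglement in photonics\n"
    "label:"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)  # deterministic
# Decode only the newly generated tokens (the predicted label)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip())
```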
**SageMaker**
```bash
# Train a managed job
python sagemaker/train_sm.py --source_dir . --dataset_dir dataset --train_file train.csv --val_file validation.csv --test_file test.csv --label_column label_name --text_fields title abstract --base_model_id meta-llama/Llama-3.2-1B --instance_type ml.g5.2xlarge --instance_count 1
# Deploy a real‑time endpoint
python sagemaker/deploy_sm.py --training_job_name <your-job> --base_model_id meta-llama/Llama-3.2-1B --instance_type ml.g5.2xlarge --default_labels_json '["cs.CL","cs.CV","cs.LG","hep-ph","quant-ph"]'
```
## Training Details
### Training Data
- Expected input: three CSV files under a base path: `train.csv`, `validation.csv`, `test.csv`.
- Required columns: a **label** column (default `label_name`) and one or more text fields (defaults: `title`, `abstract`). Missing/blank text fields are skipped; text fields are concatenated with punctuation (a minimal schema sketch follows this list).
- The repository ships utilities to prepare a **5‑class arXiv‑style demo** (labels: `['cs.CL','cs.CV','cs.LG','hep-ph','quant-ph']`).
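For illustration, a tiny script like the following produces CSVs with the expected schema; the example rows are made up, and real splits should of course contain different data:

```python
# Illustrative schema only: writes tiny CSVs with the expected columns.
import pandas as pd

rows = [
    {"title": "Quantum entanglement in photonics",
     "abstract": "We study entangled photon pairs ...",
     "label_name": "quant-ph"},
    {"title": "Vision transformer achieves state-of-the-art on ImageNet",
     "abstract": "We revisit ViT training recipes ...",
     "label_name": "cs.CV"},
]
df = pd.DataFrame(rows)
for name in ("train.csv", "validation.csv", "test.csv"):
    df.to_csv(f"dataset/{name}", index=False)  # same schema for every split
```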
### Training Procedure
#### Preprocessing
- Prompts are constructed as short instruction → answer pairs (a sketch of the template follows this list):
- **Train:** includes the gold label after `label:`.
- **Inference:** leaves `label:` empty and decodes the generated label.
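A minimal sketch of the template, reconstructed from the widget examples above; the `build_prompt` helper is hypothetical, and the repository's builder may differ in small details such as spacing:

```python
# Hedged reconstruction of the instruction -> answer prompt template.
LABELS = ['cs.CL', 'cs.CV', 'cs.LG', 'hep-ph', 'quant-ph']

def build_prompt(text: str, label: str | None = None) -> str:
    """Train prompts include the gold label; inference prompts leave it blank."""
    prompt = (
        f"Classify the text into {LABELS} and return the answer "
        f"as the exact text label.\ntext: {text}\nlabel:"
    )
    return f"{prompt} {label}" if label is not None else prompt

train_example = build_prompt("Quantum entanglement in photonics", "quant-ph")
infer_example = build_prompt("Quantum entanglement in photonics")
```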
#### Training Hyperparameters
- **Training regime:** mixed precision with `fp16=True`, `tf32=True`; 4‑bit NF4 quantization with bfloat16 compute (QLoRA‑style).
- **Selected defaults (single‑GPU)**, mirrored in the configuration sketch after this list:
- `num_train_epochs=1`
- `per_device_train_batch_size=8`, `per_device_eval_batch_size=8`
- `gradient_accumulation_steps=2` (effective batch size of 16 per device)
- `learning_rate=2e-4`, `weight_decay=1e-3`, `warmup_ratio=0.03`
- `logging_steps=10`, `evaluation_strategy="epoch"`, `save_strategy="epoch"`, `save_total_limit=2`
- LoRA: `r=2`, `alpha=2`, `dropout=0.0`
- Quantization: `load_in_4bit=True`, `bnb_4bit_quant_type="nf4"`, `bnb_4bit_compute_dtype="bfloat16"`, `bnb_4bit_use_double_quant=True`
- Generation (eval): `temperature=0.0`, `max_new_tokens=8`, `do_sample=False`
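These defaults map onto `peft`/`transformers` configuration objects roughly as follows; this is a sketch, and the repo's `scripts/train.py` is authoritative for the exact wiring:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

# 4-bit NF4 base weights with bfloat16 compute (QLoRA-style)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
# Very low-rank adapters: r=2 keeps the trainable parameter count tiny
lora_config = LoraConfig(r=2, lora_alpha=2, lora_dropout=0.0, task_type="CAUSAL_LM")
training_args = TrainingArguments(
    output_dir="llama-3.2-1b-arxiver-lora",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    learning_rate=2e-4,
    weight_decay=1e-3,
    warmup_ratio=0.03,
    logging_steps=10,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=2,
    fp16=True,
    tf32=True,
)
```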
#### Speeds, Sizes, Times
- Example environment: Laptop RTX 3080 Ti (16 GB VRAM), CUDA 12.9, PyTorch 2.8.0+cu129.
- Example run stats: ~6,314 seconds wall‑clock training, with TensorBoard logs under the run directory.
- Total training FLOPs (example): ~3.69e16 (as reported by the training logs).
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- The example evaluation uses the provided arXiv‑style 5‑label test split.
#### Factors
- Per‑class metrics are reported for `cs.CL`, `cs.CV`, `cs.LG`, `hep-ph`, `quant-ph`.
#### Metrics
- Accuracy, micro F1, macro F1, per‑class precision/recall/F1, and a confusion matrix (computable as in the sketch below).
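A sketch of how these metrics can be derived from the predicted and gold label strings with `scikit-learn`; `scripts/predict.py` already reports them, so this is for reference only:

```python
# Compute the reported metrics from lists of gold and predicted label strings.
from sklearn.metrics import (accuracy_score, f1_score,
                             classification_report, confusion_matrix)

def evaluate(y_true: list[str], y_pred: list[str], labels: list[str]) -> dict:
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "micro_f1": f1_score(y_true, y_pred, labels=labels, average="micro"),
        "macro_f1": f1_score(y_true, y_pred, labels=labels, average="macro"),
        "report": classification_report(y_true, y_pred, labels=labels),
        "confusion": confusion_matrix(y_true, y_pred, labels=labels),
    }
```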
### Results
- **Overall:** Accuracy 93.8%, Micro‑F1 0.938, Macro‑F1 0.950.
- **Per‑class (Precision / Recall / F1 / Support):**
- `cs.CL`: 0.914 / 0.963 / 0.938 / 432
- `cs.CV`: 0.935 / 0.923 / 0.929 / 545
- `cs.LG`: 0.917 / 0.890 / 0.903 / 536
- `hep-ph`: 0.994 / 0.988 / 0.991 / 164
- `quant-ph`: 0.986 / 0.990 / 0.988 / 293
#### Summary
The LoRA‑tuned 1B‑parameter Llama 3.2 model achieves strong performance on short academic texts while keeping training and inference affordable thanks to 4‑bit quantization. Performance is consistent across most classes, with particularly high scores for the physics categories.
## Model Examination
- The repo includes utilities for a **classification report** and **confusion matrix**. Inspect misclassifications to refine label definitions or add examples. Consider probing sensitivity to prompt wording.
## Environmental Impact
*(Approximate; depends on your hardware and run length.)*
Use the [MLCO2 Impact calculator](https://mlco2.github.io/impact#compute) with your GPU model, power draw, and wall‑clock runtime.
- **Hardware Type:** Single NVIDIA GPU (example: RTX 3080 Ti Laptop 16 GB)
- **Hours used:** ~1.75 hours (example)
## Technical Specifications
### Model Architecture and Objective
- **Architecture:** Decoder‑only Transformer (Llama 3.2 family when using the default base)
- **Objective:** Supervised instruction‑tuning for **single‑label classification** via generative decoding with exact‑match evaluation
- **Context length:** 512 tokens (config default; pass explicitly to trainer to ensure enforcement)
### Compute Infrastructure
#### Hardware
- NVIDIA CUDA GPU required for 4‑bit `bitsandbytes` training/inference (CPU‑only runs are not supported by the included scripts).
#### Software
- Python ≥ 3.10, PyTorch, `transformers`, `trl`, `peft`, `bitsandbytes`, `accelerate`, and standard scientific Python packages.
- Optional: Astral’s `uv` for faster, reproducible dependency management (the repo also ships `requirements.txt`).
## Citation
If you use this repository, please cite the GitHub project and the base model as appropriate.
**BibTeX (project):**
```bibtex
@software{yousefi_2025_llm_instruction_tuning_text_classification,
author = {Yousefi, Amirhossein},
title = {LLM Instruction-Tuning for Text Classification (LoRA + QLoRA)},
year = {2025},
publisher = {GitHub},
url = {https://github.com/amirhossein-yousefi/LLM-Instruction-Tuning-Text-Classification}
}
```
**APA (project):**
Yousefi, A. (2025). *LLM Instruction‑Tuning for Text Classification (LoRA + QLoRA)*. GitHub. https://github.com/amirhossein-yousefi/LLM-Instruction-Tuning-Text-Classification
**Base model:** Meta AI. (2024). *Llama 3.2‑1B* [Computer software]. Meta. https://huggingface.co/meta-llama/Llama-3.2-1B
## Glossary
- **LoRA:** Low‑Rank Adaptation, a parameter‑efficient fine‑tuning method.
- **QLoRA:** LoRA training with quantized base weights (typically 4‑bit NF4) and higher‑precision compute.
- **SFT:** Supervised Fine‑Tuning.
- **Exact‑match decoding:** Evaluates whether the generated label text exactly matches the gold label string.
## More Information
- Amazon SageMaker scripts are included for managed training and deployment.
- Roadmap ideas include multi‑label support and few‑shot exemplars in prompts.
## Model Card Authors
- Drafted by: ChatGPT (based on the repository’s README and code structure)
- Repository author: Amirhossein Yousefi
## Model Card Contact
- Open an issue on the GitHub repository for questions or contributions.