---
base_model:
- facebook/opt-2.7b
datasets:
- databricks/databricks-dolly-15k
language:
- en
license: apache-2.0
metrics:
- rouge
pipeline_tag: text-generation
library_name: transformers
---

# MiniLLM-OPT-2.7B

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

**MiniLLM-OPT-2.7B** is an OPT-2.7B model distilled from [OPT-13B](https://huggingface.co/MiniLLM/teacher-OPT-13B) on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k).

<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/7hBWGZzYMJihCRQ70XoiQ.png" width="1000">
</p>

**Note**: MiniLLM requires an [SFT model](https://huggingface.co/MiniLLM/init-opt-2.7B) for initialization to perform the PPO optimization.
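
The distilled model is a standard OPT checkpoint and loads with the usual `transformers` APIs. A minimal generation sketch, assuming the checkpoint is hosted as `MiniLLM/MiniLLM-OPT-2.7B` (the repo id and prompt format below are assumptions; see the code link above for the exact templates):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniLLM-OPT-2.7B"  # assumed repo id; adjust to the actual hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Dolly-style instruction prompt (format is an assumption, not the official template).
prompt = "Instruction: Explain knowledge distillation in one sentence.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```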

## Evaluation

We ask GPT-4 to score the responses generated by MiniLLM. The prompts are taken from [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) (test set), [self-instruct](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json), and [vicuna](https://github.com/lm-sys/vicuna-blog-eval). A sketch of this judging setup follows the figure below.

<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/rDXnaDbKH5mBYAmqGC-_a.png" width="1000">
</p>
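
For reference, a hypothetical sketch of the GPT-4 judging loop (the prompt wording, 1-10 scale, and `gpt-4` model id here are assumptions; the exact evaluation prompt is described in the paper):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(instruction: str, response: str) -> str:
    # Ask GPT-4 to rate a single generated response; the scale is an assumption.
    prompt = (
        "Rate the quality of the response to the instruction on a scale of 1-10.\n"
        f"Instruction: {instruction}\n"
        f"Response: {response}\n"
        "Score:"
    )
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```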

## Baseline Models

+ [SFT w/o KD](https://huggingface.co/MiniLLM/SFT-opt-2.7B)
+ [KD](https://huggingface.co/MiniLLM/KD-opt-2.7B)
+ [SeqKD](https://huggingface.co/MiniLLM/SeqKD-opt-2.7B)

## Citation

```
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```