---
tags:
- neuron
- optimized
- aws-neuron
- text-generation
base_model: meta-llama/Llama-3.2-1B
---
# Neuron-Optimized meta-llama/Llama-3.2-1B

This repository contains AWS Neuron-optimized files for [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
## Model Details

- **Base Model**: [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B)
- **Task**: text-generation
- **Optimization**: AWS Neuron compilation
- **Generated by**: [badaoui](https://huggingface.co/badaoui)
- **Generated using**: [Optimum Neuron Compiler Space](https://huggingface.co/spaces/optimum/neuron-export)
## Usage

This model has been optimized for AWS Neuron devices (Inferentia/Trainium). To load it with Optimum Neuron:

```python
from optimum.neuron import NeuronModelForCausalLM

# Loads the pre-compiled Neuron artifacts from the Hub; no recompilation is needed
model = NeuronModelForCausalLM.from_pretrained("badaoui/meta-llama-Llama-3.2-1B-neuron")
```
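Once loaded, text generation follows the standard `transformers` API. A minimal end-to-end sketch (the prompt and `max_new_tokens` value are illustrative; running it requires an Inferentia or Trainium instance with `optimum-neuron` installed):

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

repo_id = "badaoui/meta-llama-Llama-3.2-1B-neuron"

# The tokenizer is the standard Llama 3.2 tokenizer bundled with the repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = NeuronModelForCausalLM.from_pretrained(repo_id)

# Tokenize a prompt and generate with the compiled model
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that Neuron-compiled models use fixed input shapes chosen at compilation time, so prompts longer than the compiled sequence length will not work without recompiling.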
## Performance

These files are pre-compiled for AWS Neuron devices and should provide improved inference performance compared to the original model when deployed on Inferentia or Trainium instances.
## Original Model

For the original model, training details, and more information, please visit: [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B)