---
license: apache-2.0
base_model: google/functiongemma-270m-it
library_name: mlx
language:
- en
tags:
- quantllm
- mlx
- mlx-lm
- apple-silicon
- transformers
- q4_k_m
---
<div align="center">
# functiongemma-270m-it-4bit-mlx
**google/functiongemma-270m-it** converted to **MLX** format
<a href="https://github.com/codewithdark-git/QuantLLM">⭐ Star QuantLLM on GitHub</a>
</div>
---
## About This Model
This model is **[google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it)** converted to the **MLX** format and optimized for Apple Silicon (M1/M2/M3/M4) Macs with native acceleration.
| Property | Value |
|----------|-------|
| **Base Model** | [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) |
| **Format** | MLX |
| **Quantization** | Q4_K_M |
| **License** | apache-2.0 |
| **Created With** | [QuantLLM](https://github.com/codewithdark-git/QuantLLM) |
## Quick Start
### Generate Text with mlx-lm
```python
from mlx_lm import load, generate
# Load the model
model, tokenizer = load("QuantLLM/functiongemma-270m-it-4bit-mlx")
# Simple generation
prompt = "Explain quantum computing in simple terms"
messages = [{"role": "user", "content": prompt}]
prompt_formatted = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True
)
# Generate response
text = generate(model, tokenizer, prompt=prompt_formatted, verbose=True)
print(text)
```
### Streaming Generation
```python
from mlx_lm import load, stream_generate
model, tokenizer = load("QuantLLM/functiongemma-270m-it-4bit-mlx")
prompt = "Write a haiku about coding"
messages = [{"role": "user", "content": prompt}]
prompt_formatted = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True
)
# Stream tokens as they're generated.
# Note: recent mlx-lm releases yield GenerationResponse objects from stream_generate;
# print their .text field (older releases yielded plain text segments).
for response in stream_generate(model, tokenizer, prompt=prompt_formatted, max_tokens=200):
    print(response.text, end="", flush=True)
```
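### Adjusting Sampling

The examples above use mlx-lm's default sampling settings. If you want to tune temperature or nucleus (top-p) sampling, recent mlx-lm releases pass these through a sampler object; the sketch below assumes `make_sampler` from `mlx_lm.sample_utils` is available in your installed version (older releases accepted `temp`/`top_p` keywords on `generate` directly).

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler  # assumption: present in recent mlx-lm releases

model, tokenizer = load("QuantLLM/functiongemma-270m-it-4bit-mlx")

messages = [{"role": "user", "content": "Suggest three names for a coding assistant"}]
prompt_formatted = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True
)

# Build a sampler with a higher temperature and nucleus (top-p) sampling
sampler = make_sampler(temp=0.7, top_p=0.9)

text = generate(
    model,
    tokenizer,
    prompt=prompt_formatted,
    max_tokens=200,
    sampler=sampler,
)
print(text)
```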
### Command Line Interface
```bash
# Install mlx-lm
pip install mlx-lm
# Generate text
python -m mlx_lm.generate --model QuantLLM/functiongemma-270m-it-4bit-mlx --prompt "Hello!"
# Interactive chat
python -m mlx_lm.chat --model QuantLLM/functiongemma-270m-it-4bit-mlx
```
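mlx-lm also includes an OpenAI-compatible HTTP server, which is handy for pointing existing clients at this model running locally. The flags below reflect the commonly documented `mlx_lm.server` options; check `python -m mlx_lm.server --help` for the exact set in your installed version.

```bash
# Serve an OpenAI-compatible API locally (the port is an example value)
python -m mlx_lm.server --model QuantLLM/functiongemma-270m-it-4bit-mlx --port 8080

# Query it with a standard chat completions request
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}], "max_tokens": 100}'
```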
### System Requirements
| Requirement | Minimum |
|-------------|---------|
| **Chip** | Apple Silicon (M1/M2/M3/M4) |
| **macOS** | 13.0 (Ventura) or later |
| **Python** | 3.10+ |
| **RAM** | 8GB+ (16GB recommended) |
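As a rough sanity check on the RAM figure: 4-bit weights for a ~270M-parameter model occupy on the order of 270M × 0.5 bytes ≈ 135 MB (actual size also includes embeddings and any layers kept at higher precision), so inference fits comfortably within the 8GB minimum.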
```bash
# Install dependencies
pip install mlx-lm
```
## Model Details
| Property | Value |
|----------|-------|
| **Original Model** | [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it) |
| **Format** | MLX |
| **Quantization** | Q4_K_M |
| **License** | `apache-2.0` |
| **Export Date** | 2025-12-21 |
| **Exported By** | [QuantLLM v2.0](https://github.com/codewithdark-git/QuantLLM) |
---
## Created with QuantLLM
<div align="center">
**Convert any model to GGUF, ONNX, or MLX in one line!**
```python
from quantllm import turbo
# Load any HuggingFace model
model = turbo("google/functiongemma-270m-it")
# Export to any format
model.export("mlx", quantization="Q4_K_M")
# Push to HuggingFace
model.push("your-repo", format="mlx")
```
<a href="https://github.com/codewithdark-git/QuantLLM">
<img src="https://img.shields.io/github/stars/codewithdark-git/QuantLLM?style=social" alt="GitHub Stars">
</a>
**[Documentation](https://github.com/codewithdark-git/QuantLLM#readme)** ·
**[Report Issue](https://github.com/codewithdark-git/QuantLLM/issues)** ·
**[Request Feature](https://github.com/codewithdark-git/QuantLLM/issues)**
</div>