Update README.md
### Deployment Options

- **Transformers**: Python, PyTorch integration
- **vLLM**: High-throughput inference
- **Ollama**: Easy local deployment and inference
  - **Size**: 20GB
  - **Requirements**: Minimum 20GB RAM/VRAM for local execution
  - **Local Deployment**: Runs efficiently on local machines with sufficient resources

```bash
# Pull the model
ollama pull 169pi/alpie-core

# Run the model
ollama run 169pi/alpie-core
```
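For the Transformers option, a minimal sketch of loading and querying the model is shown below. This assumes the model is also published on the Hugging Face Hub under the same `169pi/alpie-core` identifier as the Ollama tag; the prompt and generation parameters are illustrative only.

```python
# Sketch of the Transformers deployment option.
# Assumption: the Hugging Face repo id mirrors the Ollama tag "169pi/alpie-core".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "169pi/alpie-core"  # assumed Hub id, not confirmed by this README
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place layers on available GPU/CPU memory
)

prompt = "Explain quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that, consistent with the requirements above, this path also needs roughly 20GB of RAM/VRAM to hold the weights.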

## 12. Citation