Update README.md

README.md (changed):
@@ -18,7 +18,7 @@ datasets:
 base_model: unsloth/Llama-3.2-1B-Instruct
 ---

-###
+### mecha-org/linux-command-generator-llama3.2-1b

 Natural language → Linux command. A compact Llama 3.2 1B Instruct model fine‑tuned (LoRA) to turn plain‑English requests into correct shell commands.

@@ -40,7 +40,7 @@ ollama --version

 3) Run the model interactively:
 ```bash
-ollama run
+ollama run mecha-org/linux-command-generator-llama3.2-1b
 ```
 Then type a request, e.g.:
 - "List all files in the current directory with detailed information"
@@ -51,13 +51,13 @@ Press Ctrl+C to exit.

 4) One‑off (non‑interactive):
 ```bash
-ollama run
+ollama run mecha-org/linux-command-generator-llama3.2-1b "Display the first 5 lines of access.log"
 # Expected: head -n 5 access.log
 ```

 5) Get command‑only answers (when needed):
 ```bash
-ollama run
+ollama run mecha-org/linux-command-generator-llama3.2-1b "Output only the command with no explanation. Show system information including kernel version"
 # Expected: uname -a
 ```

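The one‑off invocations above print the model's reply as-is, which is not guaranteed to be a bare command — replies can arrive wrapped in a markdown fence or followed by an explanation. A short post-processing step can recover just the command before it is run. The helper below is hypothetical (its name, regex, and behavior are illustrative, not taken from this repo):

```python
import re

def extract_command(reply: str) -> str:
    # Hypothetical post-processor: pull the first shell command out of a
    # reply that may wrap it in a markdown fence or append an explanation.
    match = re.search(r"```(?:bash|sh|shell)?\n(.*?)```", reply, re.DOTALL)
    text = match.group(1) if match else reply
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("$ "):
            line = line[2:]          # drop a copied shell prompt
        if line:
            return line              # first non-empty line is the command
    return ""

print(extract_command("```bash\nuname -a\n```\nPrints system information."))  # uname -a
```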
@@ -88,7 +88,7 @@ ollama run linux-cmd-gen -p "Find all .py files recursively"
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch

-model_id = "
+model_id = "mecha-org/linux-command-generator-llama3.2-1b"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16 if torch.cuda.is_available() else None)

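The transformers hunk above stops at loading the model; elsewhere the README wraps generation in a `generate_command` helper (visible in the `print(generate_command(...))` context around line 109). For orientation only, here is a hand-rolled sketch of the chat-prompt shape a Llama 3.2 Instruct model expects. The function name is hypothetical, and real code should call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` instead, since the official template also inserts a system header:

```python
def build_prompt(request: str) -> str:
    # Hypothetical helper: mirrors the Llama 3 chat markup by hand
    # (simplified; prefer tokenizer.apply_chat_template in real code).
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{request}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("List all files in the current directory with detailed information")
print(prompt)
```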
@@ -109,7 +109,7 @@ print(generate_command("List all files in the current directory with detailed in
 ```python
 from unsloth import FastLanguageModel

-model_id = "
+model_id = "mecha-org/linux-command-generator-llama3.2-1b"
 model, tokenizer = FastLanguageModel.from_pretrained(model_name=model_id, max_seq_length=2048)
 FastLanguageModel.for_inference(model)

@@ -174,7 +174,7 @@ Derived from Meta Llama 3.2. Use must comply with the base model license. Check
 author = {Harshvardhan Vatsa},
 title = {Linux Command Generator (Llama 3.2 1B)},
 year = {2025},
-url = {https://huggingface.co/
+url = {https://huggingface.co/mecha-org/linux-command-generator-llama3.2-1b}
 }
 ```