---
license: apache-2.0
language:
- en
base_model: google/codegemma-2b
tags:
- nl2bash
- text-to-code
- code-generation
- terminal
- command-line
pipeline_tag: text-generation
---

# zero-nl2cmds-v1: An AI Terminal Assistant Model

This is a fine-tuned version of `google/codegemma-2b` designed to translate natural language instructions into precise Linux/macOS bash commands. It is the core component of an AI-powered command-line assistant.

**Author:** Sanjayyy06

**Version:** 1.0
## Model Description

This model takes a simple instruction, like "create a single directory named 'api'", and outputs the corresponding bash command, `mkdir api`. It has been specifically trained and corrected to handle common command-line tasks with a high degree of literal precision.
## Intended Use

This model is intended to be the inference engine for a local command-line interface (CLI) tool. A user can type a command in plain English, and the tool will use this model to generate and execute the shell command.
```python
# Example usage with the transformers library
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Sanjayyy06/zero-nl2cmds-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Instruction: list all files in long format\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate the command
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected output:
# Instruction: list all files in long format
# Output: ls -l
```
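
For the "execute" half of the workflow, the sketch below shows one way a CLI wrapper might parse and run the generated command. It reuses the `tokenizer` and `model` loaded above; the `nl_to_command` helper, the "Output:"-splitting convention, and the confirmation prompt are illustrative assumptions, not part of a released tool.

```python
# A minimal sketch of a CLI wrapper, assuming the "Instruction:/Output:"
# prompt format shown above. Reuses `tokenizer` and `model` from the
# previous example; `nl_to_command` is a hypothetical helper.
import subprocess

def nl_to_command(instruction: str) -> str:
    prompt = f"Instruction: {instruction}\nOutput:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # The decoded text echoes the prompt; keep the first line after "Output:".
    return text.split("Output:", 1)[1].strip().splitlines()[0]

command = nl_to_command("list all files in long format")
# Model output is untrusted text, so confirm before executing it.
if input(f"Run `{command}`? [y/N] ").strip().lower() == "y":
    subprocess.run(command, shell=True)
```

Asking for confirmation before `subprocess.run(..., shell=True)` is a deliberate safety choice in this sketch, since the model can in principle emit destructive commands.
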
## Training Process

The model was fine-tuned in a multi-stage process to ensure high accuracy and control over its behavior:

1. **Initial fine-tuning:** The base `google/codegemma-2b` model was first fine-tuned on a 5,000-example subset of the `AnishJoshi/nl2bash-custom` dataset.
2. **Overfitting correction:** Early tests showed that the model had begun to severely overfit on common patterns (e.g., generating entire project structures for a simple `mkdir` command).
3. **Surgical strike:** A high-quality, 500-example "correctional dataset" was created to explicitly re-teach the model the literal meaning of simple commands.
4. **Final model:** The model from step 1 was then fine-tuned for a short duration on the correctional dataset, resulting in this final version, which is both knowledgeable and controllable. A sketch of this correctional pass follows the list.
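
The exact training scripts are not published. As a rough illustration, here is a minimal sketch of what the final correctional pass could look like with `transformers` and `datasets`. The two example pairs, the hyperparameters, and the use of full-parameter fine-tuning are assumptions, and in practice the stage-1 checkpoint (rather than the raw base model) would be loaded.

```python
# Illustrative sketch only: placeholder data and hyperparameters, not the
# author's actual training configuration.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "google/codegemma-2b"  # the stage-1 checkpoint would be used in practice
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical correctional pairs re-teaching literal command semantics.
pairs = [
    {"instruction": "create a single directory named 'api'", "command": "mkdir api"},
    {"instruction": "list all files in long format", "command": "ls -l"},
]

def tokenize(example):
    # Same "Instruction: ... / Output: ..." template used at inference time.
    text = (
        f"Instruction: {example['instruction']}\n"
        f"Output: {example['command']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=256)

dataset = Dataset.from_list(pairs).map(
    tokenize, remove_columns=["instruction", "command"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="zero-nl2cmds-v1-correctional",
        num_train_epochs=1,  # a "short duration" pass
        per_device_train_batch_size=2,
        learning_rate=2e-5,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Appending `tokenizer.eos_token` after each command is what lets the model learn to stop after emitting a single command, in line with the literal precision this pass was meant to restore.
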
## Citation

If you use this model in your work, please cite it as follows:

```bibtex
@misc{sanjayyy06_zero_nl2cmds_v1,
  author = {Sanjayyy06},
  title = {zero-nl2cmds-v1: A Surgically Corrected CodeGemma Model for NL-to-Bash Translation},
  year = {2025},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/Sanjayyy06/zero-nl2cmds-v1}},
}
```