---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code-generation
- python
- programming
- fine-tuned
pipeline_tag: text-generation
widget:
- text: |-
    ### Instruction:
    Write a Python function to reverse a string

    ### Input:


    ### Response:
  example_title: Reverse String
- text: |-
    ### Instruction:
    how are you?

    ### Input:


    ### Response:
  example_title: Non-Code Request
- text: |-
    ### Instruction:
    Implement quicksort algorithm in Python

    ### Input:
    Use recursion and list comprehensions

    ### Response:
  example_title: Quicksort Algorithm
base_model: Mercy-62/python-code-only-Qwen-lora
datasets:
- sahil2801/CodeAlpaca-20k
- google-research-datasets/mbpp
- openai/openai_humaneval
---

# ๐Ÿ Python Code-Only Qwen

**A fine-tuned model that generates ONLY executable Python code, with no explanations or conversational filler.**

## 🚀 Quick Inference Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer; move the model to the GPU to match the inputs below
model = AutoModelForCausalLM.from_pretrained("Mercy-62/python-code-only-Qwen-lora").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("Mercy-62/python-code-only-Qwen-lora")

# Use the exact prompt format used during training
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

# Optional: enable faster inference if the model was loaded with Unsloth
# from unsloth import FastLanguageModel
# FastLanguageModel.for_inference(model)

# Test 1: Simple code generation
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Write a Python function to reverse a string",  # instruction
            "",  # input (leave empty if no context)
            "",  # output - leave blank for generation
        )
    ],
    return_tensors="pt"
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print("Test 1 - Reverse string function:")
print(tokenizer.batch_decode(outputs)[0])
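
# A hedged post-processing sketch (not part of the original card): since the
# model is trained to emit only code after the "### Response:" marker, you can
# strip the echoed prompt and keep just the generated Python for downstream use.
generated = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
code_only = generated.split("### Response:")[-1].strip()
print("Extracted code only:")
print(code_only)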