---
license: fair-noncommercial-research-license
language:
- en
- pt
metrics:
- type: pass@1
  name: HumanEval zero-shot pass@1
  value: 86.99
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
tags:
- code
---
#### Model Details
<p class="justified-text">
<b>Nerdsking-python-coder-7B-i</b> is a 7B-parameter, partially uncensored model focused on <b>Python</b>, with <b>English</b> as its main language. It was trained heavily on Python, so although it can also code in other languages, performance there will not reach the level achieved with Python.
</p>
<i>Key Characteristics:</i>
- Parameter count: 7B
- Primary domain: Python programming
- Secondary capabilities: General coding, technical English
- Training focus: Python logic, standard library usage, algorithmic reasoning
- Alignment: Partially uncensored (developer-oriented)
<br>
#### Nerdsking Python Coder Family
🧠 <a href="https://huggingface.co/Nerdsking/nerdsking-python-coder-3B-i"> Nerdsking Python Coder 3B-i </a><br>
🧠 <a href="https://huggingface.co/Nerdsking/Nerdsking-python-coder-7B-i"> Nerdsking Python Coder 7B-i </a>
<br>
#### Benchmark
<p class="justified-text">
After intensive refinement, <b>Nerdsking-python-coder-7B-i</b> has achieved <b>86.99 on HumanEval (bf16)</b>, ranking it among the highest-performing Python-focused 7B models reported on HumanEval and surpassing even much larger models in that area.
</p>
<i>Benchmark details (164 tasks):</i>
- Official HumanEval execution protocol (test suites executed via `exec()`)
- Zero-shot pass@1
- dtype: bfloat16
- temperature: 0.1
- do_sample: False
- Evaluated on fully merged weights
- Prompting: chat-formatted with a fixed system prompt ("You are an expert Python coding assistant.")
- Quantization: none (unquantized bf16 weights)
<p class="justified-text">
<i>The configuration above is fully disclosed to support reproducibility and fair comparison.</i>
</p>
<p class="justified-text">
<i> Note: Quantized variants (INT4/INT6) may exhibit lower HumanEval scores due to reduced numerical precision.</i>
</p>
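<p class="justified-text">
<i>For concreteness, the sketch below illustrates the exec()-based strict-check step described above. It is not the official harness: the dataset id (`openai_humaneval` on the Hugging Face Hub) and the helper name are assumptions, and the real protocol sandboxes the exec() call.</i>
</p>

```python
# Illustrative sketch of strict pass@1 checking on one HumanEval task.
# Not the official harness; see the benchmark tool linked below.
from datasets import load_dataset

task = load_dataset("openai_humaneval", split="test")[0]

def passes(completion: str, task: dict) -> bool:
    """Run the task's unit tests against one candidate completion via exec()."""
    program = (
        task["prompt"] + completion + "\n"
        + task["test"] + "\n"
        + f"check({task['entry_point']})"
    )
    try:
        exec(program, {})  # the official protocol sandboxes this step
        return True
    except Exception:
        return False

# With greedy decoding (do_sample=False) there is exactly one completion per
# task, so pass@1 is simply the fraction of the 164 tasks whose tests pass.
```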
#### Comparison Table
<table>
<thead>
<tr>
<th>Model name</th>
<th>Approx. HumanEval Pass@1 (%)</th>
<th>Notes / Source</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Nerdsking-python-coder-7B-i</strong></td>
<td><strong>86.99</strong></td>
<td>Evaluated score (zero-shot, strict HumanEval pass@1, unquantized bf16 weights)</td>
</tr>
<tr>
<td>Qwen2.5-Coder-7B</td>
<td>~74–76</td>
<td>Community evaluation (OpenCompass run); figures vary by harness/settings</td>
</tr>
<tr>
<td>DeepSeek-Coder-6.7B</td>
<td>~72–73</td>
<td>Official DeepSeek report and independent replications; close to strict HumanEval protocol</td>
</tr>
<tr>
<td>CodeLlama-7B</td>
<td>~33–35</td>
<td>Meta technical report</td>
</tr>
<tr>
<td>WizardCoder-7B</td>
<td>~57–59</td>
<td>Community benchmarks; strong instruction-following but less consistent zero-shot behavior</td>
</tr>
</tbody>
</table>
<hr>
#### Benchmark tool used
https://github.com/nerdskingcom/gguf-humaneval-benchmark
Install it using:

```bash
pip install gguf-humaneval-benchmark
```

Instructions after install:

```bash
gguf-humaneval-benchmark --help
```
<hr>
#### S.o.n.n.
<p class="justified-text">
The model was refined under <b>"s.o.n.n."</b> (<i>single omni neural network</i>), a concept created by IPMN at Nerdsking.com. It is both a precise way of fine-tuning/altering existing models and a foundational concept for a broader AI architecture standard currently under active research and development.
</p>
<i>When applied to pre-existing models, it enables (see the sketch after this list):</i>
- a parameter-preserving refinement methodology
- focused, global behavioral shaping instead of task-local adapters
- avoidance of the fragmentation common in multi-adapter or task-siloed approaches
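<p class="justified-text">
<i>S.o.n.n. itself is unpublished, so no implementation can be shown here. Purely as a point of contrast, the sketch below shows the task-local adapter pattern (LoRA via the `peft` library) that the list above distinguishes it from, next to a plain full-parameter refinement that leaves behind a single set of merged weights. All hyperparameters are illustrative.</i>
</p>

```python
# Contrast sketch only -- this is NOT s.o.n.n. (which is unpublished).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-7B-Instruct")

# Task-local adapters: each task gets its own small trainable delta,
# which can fragment behavior across adapters.
adapter_cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
task_model = get_peft_model(base, adapter_cfg)  # only adapter weights train

# Global full-parameter refinement: every base weight participates, so the
# behavioral change is model-wide and ships as one merged checkpoint
# (the benchmark above was run on fully merged weights).
for p in base.parameters():
    p.requires_grad = True
```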
#### Quick Start (Inference)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nerdsking/Nerdsking-python-coder-7B-i"

# Load the tokenizer and the model in bf16, matching the benchmark precision.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a Python function that checks if a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
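<p class="justified-text">
<i>The benchmark above used chat-formatted prompts with a fixed system prompt. Continuing from the snippet above, a minimal sketch of that prompting style (assumed, not an official recipe):</i>
</p>

```python
# Chat-formatted inference with the fixed system prompt quoted in the
# benchmark details; greedy decoding (do_sample=False) matches that setup.
messages = [
    {"role": "system", "content": "You are an expert Python coding assistant."},
    {"role": "user", "content": "Write a Python function that checks if a number is prime."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```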
#### Ethical & Safety Notes
<p class="justified-text">
This model is intended for technical and research use.
Due to relaxed alignment constraints, outputs should be reviewed before deployment in production or public-facing systems.
</p>
#### Citation
If you use this model in research or benchmarking, please cite:
Nerdsking-python-coder-7B-i,
IPMN / Nerdsking.com