---
license: apache-2.0
---
|
|
# JT-Coder-8B-Instruct |
|
|
|
|
|
<p align="center">
  <a href="#" target="_blank">
    <img src="https://img.shields.io/badge/Paper-ArXiv-red">
  </a>
  <a href="https://huggingface.co/JT-LM/JT-Coder-8B-Instruct" target="_blank">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue">
  </a>
  <a href="./LICENSE" target="_blank">
    <img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-yellow.svg">
  </a>
</p>
|
|
|
|
|
**JT-Coder** is a series of **high-performance and energy-efficient** code large language models (LLMs) developed by the JiuTian team. Our core philosophy is that **high-quality data matters more than massive amounts of data**. Thanks to our innovative data-centric framework, JT-Coder is pre-trained on only **1.6T** tokens yet comprehensively outperforms multiple models of similar scale trained on approximately 4x as much data, offering a more efficient and reproducible path for developing code LLMs.
|
|
|
|
|
 |
|
|
|
|
|
*Figure 1: Performance of JT-Coder-8B-Instruct on code generation benchmarks.* |
|
|
|
|
|
## Core Features |
|
|
|
|
|
- 🚀 **State-of-the-Art Performance**: JT-Coder matches or surpasses the top existing open-source models at both the 1.5B and 8B scales across multiple code generation and comprehension benchmarks, including `EvalPlus`, `BigCodeBench`, `LiveCodeBench`, and `FullstackBench`.
|
|
|
|
|
- 🧠 **Extreme Data Efficiency**: We completed pre-training with only **1.6T** high-quality tokens. Compared to similar models that typically train on 5-6T tokens, this is roughly a **4x** improvement in data efficiency, demonstrating the immense value of our advanced data processing pipeline.
|
|
|
|
|
- 💡 **Innovative Data-Centric Framework**: |
|
|
|
|
|
  - **Pre-training Phase**: We meticulously cleaned open-source code data, filtering out low-quality and sensitive content. We also recovered and enriched high-value data such as Jupyter Notebooks, and synthesized large-scale, context-rich Q&A data and programming guides.

  - **Instruction Tuning Phase**: We pioneered the **"Instruction Evolution"** technique. It reverse-engineers the model's diverse valid outputs for simple instructions, turning implicit characteristics of the code (e.g., algorithm selection, error handling) into explicit, complex instruction constraints, significantly enriching the diversity and complexity of the instruction data; a rough sketch of the idea follows this list.
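The exact Instruction Evolution pipeline is not spelled out here; the following is a minimal, hypothetical sketch of the core idea, in which implicit traits of a sampled solution (error handling, recursion) are extracted and folded back into the instruction as explicit constraints. All function names and heuristics are illustrative assumptions, not the actual implementation.

```python
import ast

def evolve_instruction(simple_instruction: str, solution_code: str) -> str:
    """Fold implicit traits of a generated solution back into the instruction.

    Illustrative only: these heuristics are hypothetical stand-ins for the
    actual Instruction Evolution pipeline.
    """
    tree = ast.parse(solution_code)
    constraints = []

    # Implicit trait: explicit error handling (try/except or raise present)
    if any(isinstance(node, (ast.Try, ast.Raise)) for node in ast.walk(tree)):
        constraints.append("validate inputs and handle errors explicitly")

    # Implicit trait: algorithm selection (here: recursion, i.e. a function
    # that calls itself somewhere in its own body)
    for fn in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
        called = {c.func.id for c in ast.walk(fn)
                  if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}
        if fn.name in called:
            constraints.append("implement the core logic recursively")
            break

    if not constraints:
        return simple_instruction
    return f"{simple_instruction} Requirements: {'; '.join(constraints)}."


# A simple instruction plus one sampled solution yields a more constrained
# instruction for the next round of data synthesis.
solution = '''
def fib(n: int) -> int:
    if n < 0:
        raise ValueError("n must be non-negative")
    return n if n < 2 else fib(n - 1) + fib(n - 2)
'''
print(evolve_instruction("Write a function that returns the nth Fibonacci number.", solution))
```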
|
|
|
|
|
## Model List |
|
|
|
|
|
We have released the following pre-trained base models and instruction-tuned models: |
|
|
|
|
|
| Model Name | Type | Size |
| ------------------------ | -------- | ---- |
| `JT-Coder-8B-Instruct` **(You are here!)** | Instruct | 8B |
| `JT-Coder-8B-Base` | Base | 8B |
| `JT-Coder-1.5B-Instruct` | Instruct | 1.5B |
| `JT-Coder-1.5B-Base` | Base | 1.5B |
|
|
|
|
|
## Quick Start: Inference with Transformers |
|
|
|
|
|
You can easily run our models using the standard `transformers` library. |
|
|
|
|
|
### 1. Install Dependencies |
|
|
|
|
|
```bash
pip install torch transformers accelerate
```
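The inference example below enables FlashAttention-2, which requires the separate `flash-attn` package and a supported GPU. This step is optional; if you skip it, remove the `attn_implementation` argument from the loading code to fall back to the default attention implementation.

```bash
# Optional: needed only for attn_implementation="flash_attention_2"
pip install flash-attn --no-build-isolation
```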
|
|
|
|
|
### 2. Inference Code Example |
|
|
|
|
|
Below is an example Python script for inference using the `JT-Coder-8B-Instruct` model. |
|
|
|
|
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# --- 1. Configure Model Path and Device ---
# Model ID on Hugging Face Hub
model_path = "JT-LM/JT-Coder-8B-Instruct"
# Automatically select device (GPU preferred)
device = "cuda" if torch.cuda.is_available() else "cpu"

# --- 2. Load Tokenizer and Model ---
# trust_remote_code=True is necessary because the model uses a custom architecture
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    # Requires the flash-attn package; remove this argument to use default attention
    attn_implementation="flash_attention_2",
).to(device)
model.eval()

# --- 3. Construct Dialogue Input ---
# Represent the dialogue history as a list of {role, content} dictionaries
messages = [
    {"role": "user", "content": "Please write a Python function to calculate the nth Fibonacci number, including detailed comments."},
]

# --- 4. Format and Encode with apply_chat_template ---
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(device)

# --- 5. Perform Inference ---
# Set generation parameters
generation_params = {
    "max_new_tokens": 2048,
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.85,
    "top_k": 20,
}

# Generate a response
with torch.no_grad():
    outputs = model.generate(inputs, **generation_params)

# --- 6. Decode and Print Results ---
# Decode only the newly generated tokens, skipping the original prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

print("--- User Query ---")
print(messages[0]['content'])
print("\n--- Model Response ---")
print(response)
```
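As a small variant of step 5, you can stream tokens to the console as they are generated by passing a `TextStreamer` from `transformers` to `generate` (reusing the same `inputs` and `generation_params` as above):

```python
from transformers import TextStreamer

# Print tokens as they are generated; the prompt itself is skipped
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.no_grad():
    model.generate(inputs, streamer=streamer, **generation_params)
```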
|
|
|
|
|
## License |
|
|
|
|
|
The source code for this project is licensed under the [Apache 2.0 license](LICENSE). Distribution and use of the model weights are governed by their respective license agreements.
|
|
|
|
|
## Acknowledgement |
|
|
|
|
|
Our work stands on the shoulders of giants in the open-source community, and we wish to express our profound gratitude.
|
|
|
|
|
We extend our sincere thanks to the Qwen Team. Adopting the Qwen2.5 tokenizer provided a robust vocabulary foundation that was crucial for our model's powerful multilingual and coding abilities. |
|
|
|
|
|
Furthermore, we are deeply indebted to the creators and maintainers of pivotal open-source code datasets, including The Stack v2 and Code-Matrix, as well as the instruction data from projects such as OpenCoder. Their monumental efforts in collecting, curating, and sharing these vast resources provided the essential raw material for our data-centric framework. This project would not have been possible without their foundational contributions.
|
|
|
|
|
We hold the spirit of open collaboration in the highest regard and are proud to contribute back to the community that has enabled our research. |
|
|
|
|
|
## Disclaimer |
|
|
|
|
|
JT-Coder is a large language model. While it has undergone rigorous data filtering and training, it may still generate inaccurate, biased, or harmful content. Users are advised to carefully evaluate the model's output and are responsible for any consequences arising from its use. |