Work in progress

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "tabularisai/Qwen3-0.3B-distil"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the model on GPU when available, else CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Build the chat prompt and tokenize it in one step.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect during decoding.
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, temperature=0.5, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
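
If the prompt echo is unwanted, the decode step can skip it; a minimal follow-up sketch reusing `inputs` and `outputs` from the snippet above:

```python
# Decode only the newly generated tokens by slicing off the prompt portion.
generated = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```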

Developed by tabularis.ai.

Model size: 0.4B parameters · Tensor type: BF16 (Safetensors)
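
The BF16 checkpoint loads as shown above on recent GPUs; on hardware without bfloat16 support, a plain float32 load is the safe fallback. A minimal sketch (float16 is an alternative on most CUDA GPUs):

```python
import torch
from transformers import AutoModelForCausalLM

# Fallback for hardware without bfloat16 support: load the BF16 weights
# upcast to float32 (safe everywhere, at the cost of extra memory).
model = AutoModelForCausalLM.from_pretrained(
    "tabularisai/Qwen3-0.3B-distil",
    torch_dtype=torch.float32,
)
```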

Model tree for tabularisai/Qwen3-0.3B-distil: fine-tuned from Qwen/Qwen3-0.6B.
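
Since the base is Qwen3-0.6B, the distilled tokenizer may inherit Qwen3's chat template, which accepts an `enable_thinking` flag; whether this distil honors that flag is an assumption, not something the card states. A sketch reusing `tokenizer`, `messages`, and `model` from the snippet above:

```python
# Assumption: the Qwen3 chat template (with its enable_thinking switch) was
# kept during distillation. Setting it to False asks for a direct answer
# without a <think> reasoning block.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)
```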