# affine-new-1
This model has been fine-tuned for conversational AI tasks.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "weirek/affine-new-1",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("weirek/affine-new-1")

# Build a chat-formatted prompt and generate a response
messages = [
    {"role": "user", "content": "Hello, how are you?"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
- Training was performed with the Hugging Face Transformers library
- The model was fine-tuned on conversational (chat-style) data; a sketch of a typical setup is shown below
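The exact training recipe is not published here. The following is a minimal sketch of how a conversational fine-tune like this is commonly set up with the Transformers `Trainer`; the dataset, hyperparameters, and base checkpoint are illustrative assumptions, not the actual configuration used for this model.

```python
# Hedged sketch of a chat-style supervised fine-tune with Hugging Face Transformers.
# Dataset name, hyperparameters, and base checkpoint are assumptions for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "weirek/affine-new-1"  # assumption: stands in for the (unspecified) base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumption: a chat dataset with a "messages" column of {"role", "content"} dicts.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1%]")

def to_text(example):
    # Render each conversation into plain text with the tokenizer's chat template.
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(to_text).map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="affine-new-1-sft",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-5,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # Causal LM collator: labels are the input ids, no masked-LM objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```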
## Limitations
This model inherits limitations from its base model and training data. Use responsibly and be aware of potential biases.