Affine-2011-5DD686DBbagozdzR7Y94ep2ZJ4oemBQwY2DCZ4mA5QM4s8Qc

This model has been fine-tuned for conversational AI tasks.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained(
    "weirek/Affine-2011-5DD686DBbagozdzR7Y94ep2ZJ4oemBQwY2DCZ4mA5QM4s8Qc",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("weirek/Affine-2011-5DD686DBbagozdzR7Y94ep2ZJ4oemBQwY2DCZ4mA5QM4s8Qc")

# Generate text
messages = [
    {"role": "user", "content": "Hello, how are you?"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
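Note that the decode call above prints the prompt together with the model's reply, because generate() returns the prompt tokens followed by the newly generated ones. To keep only the reply, slice off the prompt first. A minimal sketch of that slicing, with token IDs shown as plain lists (the hypothetical IDs are for illustration only):

```python
# generate() returns each output row as prompt tokens + newly generated tokens,
# so the reply is everything after the prompt's length.
prompt_ids = [101, 2023, 2003, 1037, 3231, 102]           # hypothetical prompt token IDs
output_ids = prompt_ids + [7592, 1010, 2129, 2024, 2017]  # hypothetical generate() output row

prompt_len = len(prompt_ids)
reply_ids = output_ids[prompt_len:]
print(reply_ids)  # only the newly generated token IDs
```

With the tensors from the usage snippet, the equivalent slice is `outputs[0][inputs["input_ids"].shape[-1]:]`, which can then be passed to `tokenizer.decode`.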

Training Details

  • Training was performed using the Hugging Face Transformers library
  • Model was fine-tuned on conversational data
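Fine-tuning on conversational data typically flattens each multi-turn conversation into a single training string via a chat template. A minimal sketch of that preprocessing step, using a generic ChatML-style layout purely for illustration (the actual template depends on the base model's tokenizer, and `to_training_text` is a hypothetical helper, not part of this model's pipeline):

```python
def to_training_text(messages):
    """Flatten a list of chat messages into one training string.

    Uses a generic ChatML-style layout for illustration; in practice
    the template comes from the tokenizer via apply_chat_template.
    """
    parts = []
    for m in messages:
        # Each turn is wrapped in role markers so the model learns turn boundaries.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)

example = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing well, thanks!"},
]
print(to_training_text(example))
```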

Limitations

This model inherits limitations from its base model and training data. Use responsibly and be aware of potential biases.

Model Details

  • Format: Safetensors
  • Model size: 754B params
  • Tensor types: F32, BF16, F8_E4M3