Sifera V1

Sifera V1 is a fine-tuned version of Qwen2.5-1.5B-Instruct for:

  • Text Summarization
  • Note Taking
  • Key Point Extraction
  • Q&A Generation
  • Document Explanation

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("shivam909067/Sifera-V1", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("shivam909067/Sifera-V1")

messages = [
    {"role": "system", "content": "You are Sifera, an AI assistant for note-taking."},
    {"role": "user", "content": "Summarize this text: ..."}
]

# Render the chat template, tokenize, and move the inputs to the model's device
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=500)

# Decode only the newly generated tokens so the prompt is not echoed back
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
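
Alternatively, recent transformers releases let the text-generation pipeline apply the chat template for you. A minimal sketch, assuming a transformers version that accepts chat-style message lists in pipelines; the prompt is illustrative:

from transformers import pipeline

pipe = pipeline("text-generation", model="shivam909067/Sifera-V1")

messages = [
    {"role": "system", "content": "You are Sifera, an AI assistant for note-taking."},
    {"role": "user", "content": "Extract the key points from this text: ..."}
]

# The pipeline returns the whole conversation; the last message is the reply
result = pipe(messages, max_new_tokens=500)
print(result[0]["generated_text"][-1]["content"])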

Training

Fine-tuned using LoRA on custom note-taking and summarization datasets.
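
The exact training configuration is not published with this card, but the general shape of a LoRA fine-tune with the peft library looks like the sketch below. The rank, alpha, target modules, and other settings are illustrative assumptions, not the values used for Sifera V1:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

# Hypothetical adapter configuration (assumed values, not Sifera V1's actual setup)
lora_config = LoraConfig(
    r=16,                    # LoRA rank (assumed)
    lora_alpha=32,           # scaling factor (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

# ... train on chat-formatted note-taking/summarization data, then merge the
# adapters into the base weights before uploading:
# model = model.merge_and_unload()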

Model details

  • Format: Safetensors
  • Model size: 2B params
  • Tensor type: BF16

Model tree for shivam909067/Sifera-V1

  • Base model: Qwen/Qwen2.5-1.5B
  • Quantizations: 2 models
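
The quantized variants are linked from the model tree on the Hub. As an alternative, the full-precision weights can be quantized on the fly with bitsandbytes; a minimal sketch assuming a CUDA GPU plus the bitsandbytes and accelerate packages, not one of the published quantizations:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization applied at load time (illustrative settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the card's BF16 tensor type
)

model = AutoModelForCausalLM.from_pretrained(
    "shivam909067/Sifera-V1",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)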