# LoRA Model for LLaVA OneVision
This is a LoRA (Low-Rank Adaptation) adapter fine-tuned on top of `lmms-lab/llava-onevision-qwen2-0.5b-ov`.
## Usage
```python
from peft import PeftModel
from llava.model.builder import load_pretrained_model

# Load the base LLaVA-OneVision model
tokenizer, model, image_processor, context_len = load_pretrained_model(
    "lmms-lab/llava-onevision-qwen2-0.5b-ov",
    None,  # no separate model base
    model_name="llava_qwen",
    device_map="auto",
)

# Apply the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, "mianaro3/RECIPES-LORA_gui_EXPL_10_256_128_0.05_10000_1e-5_0.1_OV-0.5b-expl_multi_image-FT")
model.eval()
```
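Once loaded, the adapter-augmented model can be prompted through the standard LLaVA-NeXT utilities. The following is a minimal inference sketch, not code from this repository: the image path, the prompt, and the `qwen_1_5` conversation template are assumptions based on the upstream LLaVA-OneVision examples and may need adjusting for your setup.

```python
import copy
import torch
from PIL import Image
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

# Hypothetical input image; replace with your own file
image = Image.open("example.png")
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [t.to(dtype=torch.float16, device=model.device) for t in image_tensor]

# Build a prompt using the Qwen conversation template (assumed for this base model)
conv = copy.deepcopy(conv_templates["qwen_1_5"])
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nDescribe this image.")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = (
    tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
    .unsqueeze(0)
    .to(model.device)
)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        images=image_tensor,
        image_sizes=[image.size],
        do_sample=False,
        max_new_tokens=256,
    )
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```

If you want to deploy without the PEFT wrapper, `model = model.merge_and_unload()` folds the LoRA weights into the base model in place.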
## Files
- `adapter_config.json`: LoRA configuration
- `adapter_model.safetensors`: LoRA weights
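To check the adapter hyperparameters (rank, alpha, target modules) without loading the full model, the configuration can be read directly with PEFT. A small sketch, reusing the repository id from the usage example:

```python
from peft import PeftConfig

# Reads adapter_config.json from the Hub; returns a LoraConfig for LoRA adapters
config = PeftConfig.from_pretrained(
    "mianaro3/RECIPES-LORA_gui_EXPL_10_256_128_0.05_10000_1e-5_0.1_OV-0.5b-expl_multi_image-FT"
)
print(config.r, config.lora_alpha, config.target_modules)
```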
## Training Details
See `trainer_state.json` and `training_args.bin` for the full training configuration.
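`trainer_state.json` is plain JSON produced by the Hugging Face `Trainer`, so it can be inspected without any LLaVA dependencies. A minimal sketch, assuming the standard `Trainer` state keys:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch trainer_state.json from this repository
path = hf_hub_download(
    "mianaro3/RECIPES-LORA_gui_EXPL_10_256_128_0.05_10000_1e-5_0.1_OV-0.5b-expl_multi_image-FT",
    "trainer_state.json",
)
with open(path) as f:
    state = json.load(f)

print(state["global_step"])      # total optimizer steps
print(state["log_history"][-1])  # last logged metrics (loss, learning rate, etc.)
```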