azdin shr committed
Commit 4ae103e · verified · 1 Parent(s): d04e6d6

Add README

Files changed (1): README.md +36 -46
README.md CHANGED
@@ -1,61 +1,51 @@
  ---
  base_model: llava-hf/llava-onevision-qwen2-7b-ov-hf
- library_name: peft
- model_name: llava_qlora_weather_model
  tags:
- - base_model:adapter:llava-hf/llava-onevision-qwen2-7b-ov-hf
- - lora
- - sft
- - trl
- licence: license
- pipeline_tag: text-generation
  ---

- # Model Card for llava_qlora_weather_model

- This model is a fine-tuned version of [llava-hf/llava-onevision-qwen2-7b-ov-hf](https://huggingface.co/llava-hf/llava-onevision-qwen2-7b-ov-hf).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="None", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
- ## Training procedure

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/azdinsahir11-university-mohamed-v/llava-onevision-qlora-weather/runs/7e65t9r9)

- This model was trained with SFT.
-
- ### Framework versions
-
- - PEFT 0.16.0
- - TRL: 0.20.0
- - Transformers: 4.53.3
- - Pytorch: 2.6.0+cu124
- - Datasets: 4.0.0
- - Tokenizers: 0.21.2

- ## Citations

- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
-     title = {{TRL: Transformer Reinforcement Learning}},
-     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
-     year = 2020,
-     journal = {GitHub repository},
-     publisher = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
  ---
+ license: apache-2.0
  base_model: llava-hf/llava-onevision-qwen2-7b-ov-hf
  tags:
+ - llava
+ - llava-onevision
+ - weather
+ - satellite
+ - morocco
+ - meteorology
+ - qlora
+ - fine-tuned
  ---

+ # LLaVA-OneVision Weather Analysis - QLoRA

+ Fine-tuned using the **QLoRA** technique for weather satellite imagery analysis.

+ ## Model Details

+ - **Base Model:** llava-hf/llava-onevision-qwen2-7b-ov-hf
+ - **Technique:** QLoRA
+ - **Domain:** Weather satellite imagery analysis
+ - **Dataset:** Weather satellite images with meteorological metadata

+ ## Usage

+ ```python
+ from transformers import LlavaOnevisionForConditionalGeneration, AutoProcessor
+ import torch
+
+ # Load base model
+ model = LlavaOnevisionForConditionalGeneration.from_pretrained(
+     "llava-hf/llava-onevision-qwen2-7b-ov-hf",
+     torch_dtype=torch.bfloat16,
+     device_map="auto"
+ )
+ processor = AutoProcessor.from_pretrained("llava-hf/llava-onevision-qwen2-7b-ov-hf")
+
+ # Load fine-tuned adapter
+ model.load_adapter("azdin/llava-onevision-weather-qlora")
+
+ # Use for weather analysis...
+ ```

+ ## Training Details

+ - **Technique:** QLoRA
+ - **Quantization:** 4-bit NF4
+ - **Training Data:** Weather satellite imagery with metadata
+ - **Target Modules:** Attention and projection layers
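The new README's Usage snippet stops at `load_adapter` ("Use for weather analysis..."). A complete inference call might look like the sketch below, which follows the chat-message structure that the llava-hf LLaVA-OneVision processor's chat template expects. The image path, question text, and generation settings are illustrative assumptions, not part of the commit; only the two repo ids come from the README above.

```python
# Sketch: end-to-end inference with the fine-tuned adapter.
# Assumes a CUDA GPU and the repo ids from the README; the image
# path and question below are placeholders.

def build_weather_conversation(question: str):
    """Build a single-turn chat in the format the LLaVA-OneVision
    chat template expects: one user turn containing an image slot
    followed by the text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ],
        }
    ]

if __name__ == "__main__":
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

    model_id = "llava-hf/llava-onevision-qwen2-7b-ov-hf"
    model = LlavaOnevisionForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    model.load_adapter("azdin/llava-onevision-weather-qlora")
    processor = AutoProcessor.from_pretrained(model_id)

    image = Image.open("satellite.png")  # placeholder image path
    conversation = build_weather_conversation(
        "Describe the cloud cover and likely weather conditions."
    )
    prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=128)
    print(processor.decode(output_ids[0], skip_special_tokens=True))
```

To mirror the 4-bit NF4 setting listed under Training Details, the base model could instead be loaded with a `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")` passed as `quantization_config` before attaching the adapter; the bf16 load shown in the README trades memory for avoiding quantization overhead at inference.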