Qwen3-VL-4B-Instruct
Qwen3-VL-4B-Instruct is a powerful vision-language model with 4 billion parameters, developed by the Qwen team at Alibaba Cloud. It represents the latest generation in the Qwen series, offering stronger text understanding, deeper visual perception, extended context length, and enhanced spatial and video comprehension capabilities.
Model Description
Qwen3-VL-4B-Instruct is a multimodal large language model that seamlessly integrates visual and textual understanding. Key capabilities include:
- Visual Agent Capabilities: Operates PC/mobile GUIs, recognizes elements, understands functions, invokes tools, and completes tasks
- Code Generation from Visuals: Generates Draw.io diagrams, HTML/CSS/JS code from images and videos
- Advanced Spatial Reasoning: 2D and 3D grounding for spatial perception and embodied AI applications
- OCR Excellence: Supports 32 languages with robust handling of low-light and blurred images
- Video Understanding: Full recall and second-level indexing with timestamp-grounded event localization
- Extended Context: 256K native context length (expandable to 1M tokens)
- STEM/Math Reasoning: Advanced causal analysis and mathematical reasoning capabilities
Repository Contents
This repository contains model files in both SafeTensors and GGUF formats:
| File | Size | Format | Description |
|---|---|---|---|
| qwen3-vl-4b-instruct-abliterated.safetensors | ~8GB | SafeTensors | Main model weights (abliterated version) |
| qwen3-vl-4b-instruct-abliterated-f16.gguf | ~8GB | GGUF FP16 | FP16 GGUF export for llama.cpp-compatible inference |
Total Repository Size: ~16GB
Note: Model files are currently being downloaded. File sizes are estimates based on the 4B parameter count.
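The GGUF file can be run with a llama.cpp-based runtime instead of Transformers. Below is a minimal, text-only sketch using the llama-cpp-python bindings; image input through GGUF additionally requires a multimodal projector (mmproj) file and a llama.cpp build with Qwen3-VL support, which may not yet be available.

```python
# Minimal sketch: load the GGUF weights with llama-cpp-python (text-only inference assumed).
# Vision input would additionally need an mmproj file and a vision-capable llama.cpp build.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-vl-4b-instruct-abliterated-f16.gguf",
    n_ctx=8192,        # context window for this session (well below the 256K native limit)
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a vision-language model does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```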
Hardware Requirements
Minimum Requirements
- VRAM: 12GB GPU (for FP16 inference)
- RAM: 16GB system memory
- Disk Space: 20GB available storage
Recommended Requirements
- VRAM: 16GB+ GPU (NVIDIA RTX 4070 or better)
- RAM: 32GB system memory
- Disk Space: 30GB available storage
- GPU: CUDA-compatible GPU with compute capability 7.0+
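Before loading the model, it can help to confirm that the GPU actually meets these numbers. A quick check using standard PyTorch calls:

```python
# Quick hardware sanity check before loading the model.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB, compute capability: {major}.{minor}")
    if vram_gb < 12:
        print("Warning: less than 12 GB VRAM; consider the GGUF/quantized variants.")
    if (major, minor) < (7, 0):
        print("Warning: compute capability below 7.0; BF16/flash attention may be unavailable.")
else:
    print("No CUDA GPU detected; CPU inference will be very slow for a 4B VLM.")
```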
Performance Notes
- Enable `flash_attention_2` for better acceleration and memory efficiency
- Multi-image and video scenarios benefit significantly from flash attention
- BF16 precision recommended for optimal quality-performance balance
Usage Examples
Basic Image Understanding
```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from PIL import Image

# Load model and processor
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "E:/huggingface/qwen3-vl-4b-instruct",
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="flash_attention_2"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")

# Prepare image and text input
image = Image.open("path/to/your/image.jpg")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

# Process and generate
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

# Generate response (do_sample=True is needed for top_p/temperature to take effect)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.8,
    temperature=0.7
)
response = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(response)
```
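`generate` returns the prompt tokens followed by the newly generated ones, so the decoded string above also contains the chat template text. If only the model's reply is needed, a common pattern is to slice off the prompt before decoding, as in this sketch:

```python
# Decode only the newly generated tokens, dropping the echoed prompt.
prompt_len = inputs["input_ids"].shape[1]
generated_ids = outputs[:, prompt_len:]
response = processor.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(response)
```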
Video Understanding
```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
import cv2

# Load model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "E:/huggingface/qwen3-vl-4b-instruct",
    torch_dtype="auto",
    device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")

# Extract video frames
def extract_frames(video_path, num_frames=8):
    cap = cv2.VideoCapture(video_path)
    frames = []
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = [int(i * frame_count / num_frames) for i in range(num_frames)]
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ret, frame = cap.read()
        if ret:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

# Process video
frames = extract_frames("path/to/video.mp4")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": frames},
            {"type": "text", "text": "What happens in this video? Provide timestamps."}
        ]
    }
]

# Generate analysis
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], videos=[frames], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
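Every sampled frame is encoded by the vision tower, so high-resolution videos can exhaust VRAM quickly. One mitigation, shown below as a sketch, is to cap the per-frame resolution before calling the processor; the 768-pixel longest side is an arbitrary illustrative value, and whether the processor prefers PIL images or NumPy arrays for video frames can depend on the installed transformers version.

```python
# Optional: cap frame resolution before processing to keep vision-token counts manageable.
# The 768-pixel longest side used here is an illustrative choice, not an official default.
from PIL import Image

def downscale_frames(frames, max_side=768):
    resized = []
    for frame in frames:                      # frames are RGB arrays from extract_frames()
        img = Image.fromarray(frame)
        scale = max_side / max(img.size)
        if scale < 1.0:                       # only shrink, never upscale
            new_size = (int(img.width * scale), int(img.height * scale))
            img = img.resize(new_size, Image.BILINEAR)
        resized.append(img)
    return resized

frames = downscale_frames(frames)
```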
GUI Agent Interaction
```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from PIL import ImageGrab

# Load model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "E:/huggingface/qwen3-vl-4b-instruct",
    torch_dtype="auto",
    device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")

# Capture screenshot
screenshot = ImageGrab.grab()

# Analyze UI
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": screenshot},
            {"type": "text", "text": "Identify all clickable elements and their functions."}
        ]
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[screenshot], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
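Turning the model's answer into actions requires parsing its free-text output. The sketch below assumes, purely as an illustration, that the prompt asked for a JSON list of elements with `label`, `x`, and `y` fields; the model does not guarantee this schema, and any GUI automation library used to act on the coordinates is an extra, optional dependency.

```python
# Hypothetical post-processing: the prompt is assumed to request JSON output such as
# [{"label": "Submit button", "x": 412, "y": 630}, ...]. This schema is our own convention.
import json, re

raw = processor.decode(outputs[0], skip_special_tokens=True)
match = re.search(r"\[.*\]", raw, re.DOTALL)   # grab the first JSON array in the reply
elements = json.loads(match.group(0)) if match else []

for el in elements:
    print(f"{el.get('label', '?')} at ({el.get('x')}, {el.get('y')})")
    # A GUI automation library (e.g., pyautogui.click(el["x"], el["y"])) could act on these,
    # but that dependency and the coordinate convention are assumptions, not part of this repo.
```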
Model Specifications
Architecture
- Parameters: 4 billion (dense architecture)
- Precision: BF16 tensor type
- Context Length: 256K tokens (native), expandable to 1M
- Vision Encoder: DeepStack multi-level ViT with fine-grained feature fusion
- Positional Encoding: Interleaved-MRoPE covering temporal and spatial dimensions
- Video Processing: Text-Timestamp Alignment for precise event localization
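These values can be cross-checked against the published configuration. The exact attribute names may vary between transformers versions, so the snippet below only prints what is exposed:

```python
# Inspect the published configuration; attribute names may differ across transformers versions.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")
print(config.model_type)                                   # architecture identifier
print(getattr(config, "max_position_embeddings", "n/a"))   # native context length, if exposed here
print(sorted(config.to_dict().keys()))                     # all config fields, incl. vision settings
```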
Supported Languages (OCR)
32 languages including English, Chinese, Japanese, Korean, Arabic, French, German, Spanish, and more.
Model Innovations
- Interleaved-MRoPE: Full-frequency coverage across temporal, width, and height dimensions
- DeepStack: Multi-level ViT feature fusion for enhanced image-text alignment
- Text-Timestamp Alignment: Precise timestamp-grounded event localization in videos
Performance Tips
Optimization Strategies
- Enable Flash Attention 2: significantly improves memory efficiency and speed (`attn_implementation="flash_attention_2"`)
- Batch Processing: process multiple images/videos in batches when possible (`inputs = processor(text=texts, images=images, return_tensors="pt", padding=True)`)
- Quantization: use the GGUF format for a reduced memory footprint
  - FP16: ~8GB (included in this repository)
  - Consider INT8/INT4 quantization for edge deployment
- Context Management: for long contexts, use sliding-window or chunking strategies
- Generation Parameters (see the sketch after this list):
  - Vision-Language Tasks: `top_p=0.8`, `temperature=0.7`
  - Code Generation: `top_p=0.9`, `temperature=0.3`
  - Creative Tasks: `top_p=0.95`, `temperature=0.9`
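These presets can be kept in a small dictionary and unpacked into `generate`; the values mirror the list above, and the dictionary itself is just an illustrative convenience, not an official API:

```python
# Illustrative preset table mirroring the recommendations above.
GENERATION_PRESETS = {
    "vision_language": {"top_p": 0.8, "temperature": 0.7},
    "code": {"top_p": 0.9, "temperature": 0.3},
    "creative": {"top_p": 0.95, "temperature": 0.9},
}

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,                      # sampling must be enabled for top_p/temperature to apply
    **GENERATION_PRESETS["vision_language"],
)
```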
Memory Optimization
- Use `device_map="auto"` for automatic multi-GPU distribution
- Enable gradient checkpointing for training/fine-tuning
- Clear cache between inference runs: `torch.cuda.empty_cache()`
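A short sketch tying these memory tips together (gradient checkpointing only applies to training or fine-tuning, and `torch.cuda.empty_cache()` releases cached blocks rather than shrinking the model itself):

```python
# Free cached GPU memory between inference runs; enable gradient checkpointing for training.
import gc
import torch

# Fine-tuning only: trade compute for memory by recomputing activations in the backward pass.
model.gradient_checkpointing_enable()

# Between inference runs: drop Python references, then release cached CUDA blocks.
del inputs, outputs
gc.collect()
torch.cuda.empty_cache()
print(f"Allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GB")
```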
License
This model is licensed under the Apache License 2.0.
Key Points:
- ✅ Commercial use permitted
- ✅ Private use permitted
- ✅ Modification and distribution allowed
- ⚠️ Must include copyright notice and license
- ⚠️ Must state significant changes
- ❌ No trademark use
- ❌ No warranty provided
Full license text: https://www.apache.org/licenses/LICENSE-2.0
Citation
If you use Qwen3-VL-4B-Instruct in your research or applications, please cite:
```bibtex
@article{qwen3vl2025,
  title={Qwen3-VL: The Next Generation Vision-Language Model},
  author={Qwen Team},
  journal={arXiv preprint},
  year={2025},
  organization={Alibaba Cloud}
}
```
Official Resources
- Hugging Face: https://huggingface.co/Qwen/Qwen3-VL-4B-Instruct
- GitHub Repository: https://github.com/QwenLM/Qwen3-VL
- Documentation: https://huggingface.co/docs/transformers/main/model_doc/qwen3_vl
- Model Card: https://huggingface.co/Qwen/Qwen3-VL-4B-Instruct#model-card
- Technical Report: Check GitHub repository for latest research papers
Support
For issues, questions, or contributions:
- GitHub Issues: https://github.com/QwenLM/Qwen3-VL/issues
- Hugging Face Discussions: https://huggingface.co/Qwen/Qwen3-VL-4B-Instruct/discussions
- Official Documentation: https://qwenlm.github.io/
Release Date: October 15, 2025
Model Version: Qwen3-VL-4B-Instruct (Abliterated)
Last Updated: November 5, 2025