Qwen3-VL-4B-Instruct-c_abliterated-v2

Qwen3-VL-4B-Instruct-c_abliterated-v2 is an advanced evolution of the Qwen3-VL-4B-Instruct architecture. This v2 release focuses on Continual Abliteration, a refined process designed to systematically remove refusal mechanisms through repeated training iterations. The result is a model optimized for high-fidelity reasoning and captioning across even the most complex, nuanced, or restrictive visual contexts.

Key Highlights

  • Continual Abliteration (c_abliterated): Specifically trained via repeated iterations to target and neutralize refusal vectors, so the model gives direct answers to prompts that standard models would refuse or deflect.
  • High-Fidelity Reasoning: Goes beyond simple tagging to provide deep reasoning and context-aware descriptions for artistic, technical, and abstract imagery.
  • Unrestricted Multimodal Analysis: Optimized for research, red-teaming, and datasets where unfiltered visual interpretation is necessary for thorough analysis.
  • Flexible Aspect Ratios: Maintains spatial awareness and accuracy across wide, tall, square, and non-standard image dimensions.
  • Enhanced Instruction Following: Leverages the base Qwen3-VL-4B power to handle complex, multi-step prompts involving visual data.
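The "abliteration" idea behind the first bullet can be sketched as directional ablation: a refusal direction is estimated from model activations and projected out of the hidden states so it can no longer be expressed. The sketch below is illustrative only, not this model's actual training code; the direction `r` and the toy hidden states are made up:

```python
import numpy as np

# Illustrative sketch of directional ablation ("abliteration"):
# project a (hypothetical) refusal direction r out of every hidden state.
def ablate_direction(hidden, r):
    r = r / np.linalg.norm(r)                  # unit refusal direction
    return hidden - np.outer(hidden @ r, r)    # remove the component along r

h = np.array([[1.0, 2.0],
              [3.0, 4.0]])                     # toy hidden states, one per row
r = np.array([1.0, 0.0])                       # toy refusal direction
out = ablate_direction(h, r)
# out has zero component along r: [[0., 2.], [0., 4.]]
```

In continual abliteration as described above, this kind of intervention is applied repeatedly across training iterations rather than as a single one-shot edit.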

Quick Start with Transformers

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

# Load the v2 c_abliterated model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VL-4B-Instruct-c_abliterated-v2", 
    torch_dtype="auto", 
    device_map="auto"
)

processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen3-VL-4B-Instruct-c_abliterated-v2")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption and reasoning for this image."},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
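The trimming step above exists because generate() returns the prompt tokens followed by the continuation, so slicing off len(in_ids) per sequence leaves only the newly generated tokens. The same slicing on dummy integer ids (made up for illustration):

```python
# generate() output = prompt ids + new ids; strip the prompt per sequence
prompt_ids = [[101, 7, 8]]            # dummy prompt token ids
generated = [[101, 7, 8, 42, 43]]     # dummy generate() output
trimmed = [out[len(inp):] for inp, out in zip(prompt_ids, generated)]
print(trimmed)  # [[42, 43]]
```

Without this step, batch_decode would echo the full prompt back at the start of every response.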

Intended Use

  • Refusal Research: Evaluating how vision-language models behave when standard guardrails are removed through iterative training.
  • Complex Dataset Captioning: Generating descriptive metadata for medical, forensic, or controversial historical archives.
  • Red-Teaming: Assisting security researchers in testing the limits of multimodal safety filters.
  • Creative Freedom: Enabling artists and writers to generate descriptions for "edge-case" visual concepts without refusal-driven interruptions.

Limitations & Ethics

Warning: As a c_abliterated model, this version will not refuse prompts based on typical safety guidelines.

  • Explicit Content: The model may generate graphic, explicit, or offensive text based on image input.
  • Non-Production Use: This model is intended for research and controlled environments, not for general-purpose public applications.
  • Factual Accuracy: While reasoning is enhanced, the model can still hallucinate or misinterpret highly abstract or synthetic visuals.