# Vision LoRA Adapter
This is a LoRA adapter for vision-language models, trained to adapt vision tower and connector layers in addition to language model layers.
## Model Details
- Base Model: Qwen/Qwen3-VL-4B-Instruct
- LoRA Rank: 16
- LoRA Alpha: 32
- Target Modules:
  - Language Model: ✓
  - Vision Tower: ✓
  - Connector/Projector: ✓
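With rank 16 and alpha 32 as listed above, LoRA scales its low-rank update by alpha/rank = 2. A minimal NumPy sketch of how an adapted weight is formed at inference time (the matrix shapes here are illustrative, not taken from this adapter):

```python
import numpy as np

rank, alpha = 16, 32
d_out, d_in = 64, 64  # illustrative layer size, not the real model's

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((rank, d_in))   # LoRA "down" projection
B = np.zeros((d_out, rank))             # LoRA "up" projection, zero-initialized

# Effective weight seen at inference: W + (alpha / rank) * B @ A
W_adapted = W + (alpha / rank) * (B @ A)

# With B initialized to zero, the adapter starts as a no-op on the base weight
assert np.allclose(W_adapted, W)
```

Because alpha is fixed at 2× the rank, the learned update is applied with scaling factor 2 regardless of how large the individual `A` and `B` entries become during training.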
## Usage with vLLM
```python
from vllm import LLM
from vllm.lora.request import LoRARequest

# Load model with LoRA support
llm = LLM(
    model="Qwen/Qwen3-VL-4B-Instruct",
    enable_lora=True,
    max_loras=1,
    max_lora_rank=16,
)

# Generate with LoRA
lora_request = LoRARequest("adapter", 1, "prashanth058/qwen3-4b-vl-lora-vision-connector")
outputs = llm.generate(
    prompts=["<your prompt>"],
    lora_request=lora_request,
)
```
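If you serve the base model with vLLM's OpenAI-compatible server instead of the offline `LLM` API, the adapter can be registered at startup. The module name `vision_adapter` below is an arbitrary label chosen for this example, not something defined by this repo:

```shell
# Serve the base model with the adapter pre-registered (adapter name is arbitrary)
vllm serve Qwen/Qwen3-VL-4B-Instruct \
  --enable-lora \
  --max-lora-rank 16 \
  --lora-modules vision_adapter=prashanth058/qwen3-4b-vl-lora-vision-connector
```

Requests can then select the adapter by passing `vision_adapter` as the `model` field in the OpenAI-compatible API.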
## Usage with Transformers + PEFT
```python
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel

# Load base model
model = AutoModelForVision2Seq.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")

# Load adapter
model = PeftModel.from_pretrained(model, "prashanth058/qwen3-4b-vl-lora-vision-connector")

# Generate
# ... (process your inputs)
outputs = model.generate(**inputs)
```
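The elided input-processing step might look like the following for an image-plus-text prompt. This is a hedged sketch, assuming a recent `transformers` version whose chat template handles image loading directly; `photo.jpg` is a placeholder path, not a file shipped with this adapter:

```python
# Hypothetical input preparation using the processor's chat template;
# assumes `model` and `processor` were loaded as shown above.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "photo.jpg"},  # placeholder image path
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```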
## Training Details
This adapter was trained to demonstrate vision layer adaptation capabilities in vLLM.
- Dataset: Synthetic/small-scale training data
- Training: PEFT LoRA with vision layer targeting
- Purpose: Testing and demonstration
## License

This adapter follows the license of the base model: Qwen/Qwen3-VL-4B-Instruct.
## Citation

If you use this adapter, please cite:

```bibtex
@misc{vision-lora-adapter,
  author       = {vLLM Team},
  title        = {Vision LoRA Adapter for Qwen/Qwen3-VL-4B-Instruct},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/prashanth058/qwen3-4b-vl-lora-vision-connector}},
}
```