This is Qwen/Qwen3-4B-Instruct-2507 quantized to W8A8 (INT8) with LLM Compressor using SmoothQuant. The model has been created, tested, and evaluated by The Kaitchup. It is compatible with vLLM v0.11 and does not require a Blackwell GPU; it was tested on an RTX 4090.
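A minimal way to try the checkpoint with vLLM's standard CLI, assuming vLLM v0.11 is installed and a supported GPU (such as the RTX 4090 used for testing) is available:

```shell
# Install the vLLM version this card was tested against (assumption: a
# compatible CUDA environment is already set up).
pip install "vllm==0.11.*"

# Serve the quantized checkpoint with vLLM's OpenAI-compatible server.
vllm serve kaitchup/Qwen3-4B-Instruct-2507-w8a8-smoothquant
```

Once the server is up, any OpenAI-compatible client can send requests to it.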

How to Support My Work

Subscribe to The Kaitchup. Subscriptions help me continue quantizing and evaluating models for free. Or, if you would rather contribute some GPU hours, you can "buy me a coffee" on Ko-fi.

Format: Safetensors
Model size: 4B params
Tensor types: BF16, I8
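The I8 tensor type reflects the W8A8 SmoothQuant quantization named in the repo. As a toy, self-contained sketch (not the actual LLM Compressor implementation; all numbers are invented), SmoothQuant chooses per-channel scales that migrate activation outliers into the weights, leaving the layer's output mathematically unchanged while shrinking the activation range so INT8 quantization loses less precision:

```python
# Toy illustration of the SmoothQuant smoothing step for one linear layer.
# s_j = max|X_j|^alpha / max|W_j|^(1 - alpha), then X' = X / s, W' = W * s.
# Hypothetical numbers; not taken from the real model.

def smooth_scales(x_cols_max, w_rows_max, alpha=0.5):
    return [xa**alpha / wa**(1 - alpha) for xa, wa in zip(x_cols_max, w_rows_max)]

X = [[0.1, 20.0, 0.2],      # activations: channel 1 has a large outlier
     [0.3, 18.0, 0.1]]
W = [[0.5, -0.4],           # weights: 3 input channels x 2 outputs
     [0.02, 0.03],
     [-0.6, 0.7]]

x_max = [max(abs(row[j]) for row in X) for j in range(3)]   # per input channel
w_max = [max(abs(v) for v in W[j]) for j in range(3)]
s = smooth_scales(x_max, w_max)

# Divide activation columns by s, multiply weight rows by s.
Xs = [[v / s[j] for j, v in enumerate(row)] for row in X]
Ws = [[v * s[j] for v in W[j]] for j in range(3)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

orig = matmul(X, W)
smoothed = matmul(Xs, Ws)

# The layer output is unchanged, but the activation outlier is tamed,
# which is what makes INT8 activation quantization (the "A8" in W8A8) viable.
assert all(abs(a - b) < 1e-9 for ra, rb in zip(orig, smoothed)
           for a, b in zip(ra, rb))
print("max |X| before:", max(x_max))
print("max |X'| after: ", max(abs(v) for row in Xs for v in row))
```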
Model: kaitchup/Qwen3-4B-Instruct-2507-w8a8-smoothquant, quantized from Qwen/Qwen3-4B-Instruct-2507.