This is my first attempt at quantizing the Qwen3 model `Qwen/Qwen3-Coder-30B-A3B-Instruct` with auto-round, like so:

```bash
auto-round-light \
  --model "Qwen/Qwen3-Coder-30B-A3B-Instruct" \
  --scheme "W4A16" \
  --format "auto_gptq" \
  --output_dir "./Quantized" \
  --model_dtype fp16
```
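If you want to try the resulting checkpoint, below is a minimal inference sketch using `transformers`. This is an untested sketch, not a verified recipe: it assumes you have `transformers`, `optimum`, and a GPTQ backend (such as `gptqmodel` or `auto-gptq`) installed, plus enough GPU memory for the 4-bit weights.

```python
# Minimal sketch: load this GPTQ checkpoint with transformers.
# Assumes `pip install transformers optimum gptqmodel` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pramjana/Qwen3-Coder-30B-A3B-Instruct-4bit-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt and generate a completion.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```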