---
license: apache-2.0
base_model:
- Qwen/Qwen3-Coder-30B-A3B-Instruct
library_name: transformers
---

This is my first attempt at quantizing Qwen/Qwen3-Coder-30B-A3B-Instruct with auto-round, using the W4A16 scheme (4-bit weights, 16-bit activations) and exporting in AutoGPTQ format:

```
auto-round-light --model "Qwen/Qwen3-Coder-30B-A3B-Instruct" \
  --scheme "W4A16" \
  --format "auto_gptq" \
  --output_dir "./Quantized" \
  --model_dtype fp16
```
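For intuition, W4A16 stores each group of weights (typically 128 per group) as 4-bit integers plus a per-group scale and zero point, while activations stay in 16-bit. Below is a minimal NumPy sketch of plain round-to-nearest group quantization to show the storage format only; auto-round itself goes further and *tunes* the rounding and clipping, so this is not its actual algorithm:

```python
import numpy as np

def quantize_w4(weights, group_size=128):
    # Simplified asymmetric 4-bit group quantization (round-to-nearest).
    # Not AutoRound's tuned-rounding procedure -- illustration only.
    w = weights.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / 15.0          # 4 bits -> 16 levels (0..15)
    q = np.clip(np.round((w - wmin) / scale), 0, 15).astype(np.uint8)
    return q, scale, wmin

def dequantize_w4(q, scale, wmin, shape):
    # Reconstruct approximate fp32 weights from ints + group metadata.
    return (q.astype(np.float32) * scale + wmin).reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 128)).astype(np.float32)
q, s, z = quantize_w4(w)
w_hat = dequantize_w4(q, s, z, w.shape)
max_err = float(np.abs(w - w_hat).max())  # bounded by half a quant step
```

The reconstruction error per weight is bounded by half of the group's quantization step, which is why keeping groups small (128) keeps the scheme accurate despite only 16 levels.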