---
license: apache-2.0
base_model:
  - Qwen/Qwen3-0.6B
tags:
  - llm-compressor
datasets:
  - HuggingFaceH4/ultrachat_200k
---

# Qwen3-0.6B-NVFP4

This is [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) quantized to 4-bit NVFP4 with [LLM Compressor](https://github.com/vllm-project/llm-compressor), with both weights and activations quantized. Calibration used 512 samples of up to 2,048 tokens, with the chat template applied, drawn from [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k).
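For reference, a run with these settings would look roughly like the sketch below, using LLM Compressor's `oneshot` API with the `NVFP4` scheme. The exact script and recipe used for this checkpoint are not published here, so treat the arguments and the `ignore` list as illustrative, not as the authoritative configuration.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-0.6B"
NUM_CALIBRATION_SAMPLES = 512  # as described above
MAX_SEQUENCE_LENGTH = 2048     # as described above

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration data: ultrachat_200k with the chat template applied.
ds = load_dataset(
    "HuggingFaceH4/ultrachat_200k",
    split=f"train_sft[:{NUM_CALIBRATION_SAMPLES}]",
)
ds = ds.map(
    lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)}
)

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )

ds = ds.map(tokenize, remove_columns=ds.column_names)

# NVFP4 recipe: 4-bit weights and activations for all Linear layers,
# keeping the output head in higher precision (an assumed, common choice).
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained("Qwen3-0.6B-NVFP4", save_compressed=True)
tokenizer.save_pretrained("Qwen3-0.6B-NVFP4")
```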

The quantization was done, tested, and evaluated by The Kaitchup. The model is compatible with vLLM; serve it on a Blackwell GPU, which supports NVFP4 natively, to get more than 2x higher throughput.
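A minimal sketch of running the checkpoint with vLLM's offline API follows. The model identifier is assumed from this card's title; substitute the actual Hub repo id or a local path.

```python
from vllm import LLM, SamplingParams

# Hub repo id or local path of this checkpoint (assumed from the card title).
llm = LLM(model="Qwen3-0.6B-NVFP4")

params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)
outputs = llm.generate(["Briefly explain what NVFP4 quantization is."], params)
print(outputs[0].outputs[0].text)
```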

More details in this article: *NVFP4: Same Accuracy with 2.3x Higher Throughput for 4-Bit LLMs*

## How to Support My Work

Subscribe to The Kaitchup. Or, for a one-time contribution, here is my ko-fi link: https://ko-fi.com/bnjmn_marie

This helps me a lot and lets me continue quantizing and evaluating models for free.