---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B-Instruct-2507
tags:
- llmcompressor
---

This is [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) quantized to NVFP4 with [LLM Compressor](https://github.com/vllm-project/llm-compressor). The model was created, tested, and evaluated by The Kaitchup.

The model is compatible with vLLM v0.11 (it does not work on a Blackwell GPU). Tested with an RTX 4090.

- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0

## How to Support My Work

Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). This helps me a lot to continue quantizing and evaluating models for free.

Or, if you prefer to give some GPU hours, "[buy me a kofi](https://ko-fi.com/bnjmn_marie)".
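Since the model is intended to run with vLLM v0.11, here is a minimal offline-inference sketch. The repository id in the snippet is a placeholder (the actual repo name is not stated here); replace it with this model's id on the Hub. Requires a supported GPU and a vLLM installation.

```python
from vllm import LLM, SamplingParams

# Placeholder: substitute this repository's actual Hub id.
llm = LLM(model="<this-repo-id>")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain NVFP4 quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```

The same model can also be served with `vllm serve <this-repo-id>` and queried through the OpenAI-compatible API.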