Quant Size

#2
by aao331 - opened

Hey, thanks for this quant. I think it's the best-quality quant for GLM 4.6 right now.

Just a question: GLM-4.5-GPTQ-Int4-Int8Mix was about 192 GB, but this one increased to almost 250 GB. Is there a reason for the size difference? More quality? Or perhaps the MTP layer is unquantized?

Cheers!

QuantTrio org

Thanks for the kind words!

The GLM-4.5-GPTQ-Int4-Int8Mix was built with a minimal mixed-precision strategy, just slightly heavier than a pure 4-bit quantization.

For GLM-4.6-GPTQ-Int4-Int8Mix, although I hadn't originally planned a mixed version, following a community suggestion (link) I applied the same mix logic I previously implemented in DeepSeek-R1-0528-GPTQ-Int4-Int8Mix-Medium.

This configuration introduces a broader degree of mixing, averaging around 5.2 effective bits overall. The intention is to achieve more stable and consistent generation quality, particularly for coding tasks, while still allowing a 384 GB VRAM rig to serve API sessions efficiently.
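For a rough sense of where the ~250 GB comes from, here is a back-of-the-envelope sketch in Python. The parameter count and the int4/int8 split are illustrative assumptions, not the exact quantization recipe:

```python
# Back-of-the-envelope size estimate for a mixed-precision quant.
# All numbers below are assumptions for illustration.

PARAMS = 355e9   # approximate GLM-4.6 parameter count (assumed)
AVG_BITS = 5.2   # average effective bits quoted above

# If a fraction f of weights stays at 8-bit and the rest at 4-bit,
# then 4*(1 - f) + 8*f = AVG_BITS, so f = 0.3 (about 30% int8).
f = (AVG_BITS - 4) / (8 - 4)

weight_bytes = PARAMS * AVG_BITS / 8
print(f"int8 fraction: {f:.0%}")                       # -> 30%
print(f"weights alone: ~{weight_bytes / 1e9:.0f} GB")  # -> ~231 GB
# Scales, zero-points, and any unquantized tensors (e.g. embeddings)
# push the on-disk size toward the observed ~250 GB.
```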

🫰

Has anyone tried this with Blackwell workstation GPUs?

Yes, I could not get this to run. I get a CUDA illegal memory access error. I was hoping I could resolve it, but I tried a few different CUDA versions and vLLM builds. No dice. I wish I could get it working. I tried on four RTX PRO 6000 Blackwell Max-Q cards.
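For reference, this is the shape of launch that fails, as a minimal sketch using vLLM's Python API. The repo id and settings are assumptions; the actual failing configuration may have differed:

```python
from vllm import LLM, SamplingParams

# Minimal 4-way tensor-parallel launch (assumed configuration).
llm = LLM(
    model="QuantTrio/GLM-4.6-GPTQ-Int4-Int8Mix",
    tensor_parallel_size=4,  # one shard per Blackwell card
)

outputs = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```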


Oh no :( Did you try any of the AWQ quants available here on HF?

The AWQ does work, but I was hoping to get this one going for even better quality.

I know, it's such a bummer that support for sm120 cards is still lagging :(

This model works great for me too, and I really like the strategy of a larger size with more layers preserved at higher precision. This would be good on all the models (Qwen3 too). Thanks!


I was able to get this working on sm120 with vLLM. I added a PR to fix this; it is still pending merge, but you can try it out yourself: https://github.com/vllm-project/vllm/pull/26953
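If you want to sanity-check whether your card actually reports sm120 and whether your PyTorch build ships kernels for it, here is a quick sketch using standard PyTorch calls (device index 0 assumed):

```python
import torch

# Does the card report Blackwell (sm_120), and does this PyTorch
# build include kernels compiled for that arch?
major, minor = torch.cuda.get_device_capability(0)
print(f"compute capability: sm_{major}{minor}")        # RTX PRO 6000 Blackwell -> sm_120
print("supported archs:", torch.cuda.get_arch_list())  # look for 'sm_120'
```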
