Quantized model for vLLM

  • tool: AutoAWQ, 4-bit
  • calibration: Japanese Wikipedia

See the base model for details:

https://huggingface.co/nitky/RoguePlanet-DeepSeek-R1-Qwen-32B
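A minimal usage sketch for serving this AWQ-quantized checkpoint with vLLM's offline inference API. The repo ID is taken from this card; the sampling parameters are illustrative assumptions, and running this requires a GPU with enough memory for the 4-bit weights.

```python
# Sketch: load the AWQ-quantized model with vLLM (assumes vllm is installed
# and sufficient GPU memory is available).
from vllm import LLM, SamplingParams

llm = LLM(
    model="fujisan/RoguePlanet-DeepSeek-R1-Qwen-32B-AWQ-calib-wiki",
    quantization="awq",  # tell vLLM to use the AWQ kernels
)

# Example sampling settings (illustrative values, not from this card).
params = SamplingParams(temperature=0.6, max_tokens=256)

outputs = llm.generate(["こんにちは。自己紹介をしてください。"], params)
print(outputs[0].outputs[0].text)
```

Alternatively, the same model can be served over HTTP with `vllm serve fujisan/RoguePlanet-DeepSeek-R1-Qwen-32B-AWQ-calib-wiki --quantization awq`.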
