---
pipeline_tag: text-generation
base_model:
- cerebras/Qwen3-Coder-REAP-25B-A3B
---

This is an MXFP4_MOE quantization of the model Qwen3-Coder-REAP-25B-A3B.

An imatrix version is included, based on the imatrix from bartowski.

I also created my own imatrix versions, marked `codetiny-exp` and `codemedium-exp`. These are experimental: instead of a general-knowledge calibration set, I used datasets that contain *only* coding data:

- the [code_tiny](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/code_tiny.parquet) dataset from [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration)
- the [code_medium](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/code_medium.parquet) dataset from [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration)

Since this is a coding-specific model, a coding-only calibration set seemed better suited, but further testing is needed. Please provide feedback.

Original model: https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B
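For reference, the general llama.cpp workflow for producing such an imatrix quant looks roughly like the sketch below. This is an assumption about the process, not the exact commands used for this repo; the GGUF and calibration file names are placeholders, and the parquet calibration datasets would first need to be exported to plain text for `llama-imatrix`.

```shell
# Sketch only: assumed llama.cpp tooling; file names are hypothetical placeholders.

# 1. Compute an importance matrix from a coding-only calibration text file
#    (e.g. code_tiny exported from parquet to plain text):
./llama-imatrix \
    -m Qwen3-Coder-REAP-25B-A3B-F16.gguf \
    -f code_tiny.txt \
    -o imatrix-codetiny.dat

# 2. Quantize to MXFP4_MOE, guided by that imatrix:
./llama-quantize \
    --imatrix imatrix-codetiny.dat \
    Qwen3-Coder-REAP-25B-A3B-F16.gguf \
    Qwen3-Coder-REAP-25B-A3B-MXFP4_MOE-codetiny-exp.gguf \
    MXFP4_MOE
```

The idea is that the imatrix weights the quantization error by how important each tensor element is on the calibration data, so a coding-only calibration set may preserve coding ability better on a coding-specific model.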