nightmedia committed · Commit 9ab8c23 · verified · 1 Parent(s): 6819572

Update README.md

Files changed (1): README.md (+9, -1)
README.md CHANGED

@@ -24,7 +24,15 @@ base_model: cerebras/Qwen3-Coder-REAP-25B-A3B
 
 # Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx
 
-This model [Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx](https://huggingface.co/Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx) was
+The regular Deckard (qx) formula uses embeddings at the same bit width as the data stores, in this case 4-bit.
+
+The head and select attention paths are enhanced to 6-bit, and the model is quantized with group size 32 (hi).
+
+There is an updated model, [Qwen3-Coder-REAP-25B-A3B-qx65x-hi-mlx](https://huggingface.co/nightmedia/Qwen3-Coder-REAP-25B-A3B-qx65x-hi-mlx), which uses 6-bit embeddings on a 5-bit base and should perform slightly better on long context.
+
+-G
+
+This model [Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx](https://huggingface.co/nightmedia/Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx) was
 converted to MLX format from [cerebras/Qwen3-Coder-REAP-25B-A3B](https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B)
  using mlx-lm version **0.28.3**.
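
The quantization settings named in the added README text above (4-bit base, group size 32) can be illustrated with mlx-lm's Python conversion API. This is a minimal sketch under stated assumptions, not the author's recipe: it applies a uniform 4-bit quantization and does not reproduce the mixed qx64-hi scheme, which additionally promotes the head and select attention paths to 6-bit; the output directory name is made up.

```python
# Hypothetical sketch: uniform 4-bit quantization at group size 32 with mlx-lm.
# The actual qx64-hi model mixes 6-bit head/attention paths with a 4-bit base,
# which this uniform call does not reproduce.
from mlx_lm import convert

convert(
    hf_path="cerebras/Qwen3-Coder-REAP-25B-A3B",  # source model named in the diff
    mlx_path="Qwen3-Coder-REAP-25B-A3B-q4-g32",   # assumed local output directory
    quantize=True,
    q_bits=4,         # base weight precision
    q_group_size=32,  # the "hi" group size mentioned in the README
)
```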
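
A short usage sketch for the converted model, assuming the standard mlx-lm loading path (the prompt is illustrative only):

```python
# Minimal usage sketch: load the quantized model and generate with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx")

prompt = "Write a Python function that checks whether a string is a palindrome."

# Apply the chat template when the tokenizer provides one.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```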