---
license: apache-2.0
tags:
- mlx
base_model: GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0
---

# GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0-mlx

This quantized low-bit model [GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0-mlx](https://huggingface.co/GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0-mlx) was converted to MLX format from [`GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0`](https://huggingface.co/GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0) using gbx-lm version **0.3.5**.

Refer to the [original model card](https://huggingface.co/GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0) for more details on the model.

## Use with mlx

```bash
pip install gbx-lm
```

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
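
Because this is a chat-style distilled model, prompts are usually formatted with the tokenizer's chat template before generation. The snippet below is a minimal sketch, assuming the returned tokenizer exposes the standard Hugging Face `apply_chat_template` method; the message content is illustrative only.

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/DeepSeek-R1-Distill-Qwen-1.5B-layer-mix-bpw-4.0-mlx")

# Build a chat-formatted prompt (assumes the tokenizer forwards the standard
# Hugging Face apply_chat_template method; adjust if your version differs).
messages = [{"role": "user", "content": "Explain low-bit quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```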