---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated
tags:
- chat
- abliterated
- uncensored
- mlx
library_name: mlx
---

# CallMcMargin/Qwen2.5-14B-Instruct-1M-abliterated-mlx-bf16-affine-qgroup32-q5

This model [CallMcMargin/Qwen2.5-14B-Instruct-1M-abliterated-mlx-bf16-affine-qgroup32-q5](https://huggingface.co/CallMcMargin/Qwen2.5-14B-Instruct-1M-abliterated-mlx-bf16-affine-qgroup32-q5) was converted to MLX format from [huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated) using mlx-lm version **0.28.3**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("CallMcMargin/Qwen2.5-14B-Instruct-1M-abliterated-mlx-bf16-affine-qgroup32-q5")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
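mlx-lm also installs a command-line generator, so you can try the model without writing any Python. A minimal sketch using the `mlx_lm.generate` entry point shipped with the package; the `--max-tokens` value here is an arbitrary choice for illustration:

```bash
# Quick smoke test from the shell; downloads the model on first run
mlx_lm.generate \
  --model CallMcMargin/Qwen2.5-14B-Instruct-1M-abliterated-mlx-bf16-affine-qgroup32-q5 \
  --prompt "hello" \
  --max-tokens 256
```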