---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
inference: false
language:
- fr
- it
- de
- es
- en
license: apache-2.0
model_creator: Mistral AI
model_name: Mixtral 8X7B Instruct v0.1
model_type: mixtral
pipeline_tag: text-generation
quantized_by: Second State Inc.
---

# Mixtral-8x7B-Instruct-v0.1-GGUF

## Original Model

[mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

## Run with LlamaEdge

- LlamaEdge version: [v0.2.8](https://github.com/second-state/LlamaEdge/releases/tag/0.2.8) and above

- Prompt template

  - Prompt type: `mistral-instruct`

  - Prompt string (a sketch for assembling it by hand appears after the table below)

    ```text
    [INST] {user_message_1} [/INST] {assistant_message_1} [INST] {user_message_2} [/INST]
    ```

- Run as LlamaEdge service (an example request against the running server appears after the table below)

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:mixtral-8x7b-instruct-v0.1.Q4_0.gguf llama-api-server.wasm -p mistral-instruct
  ```

- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:mixtral-8x7b-instruct-v0.1.Q4_0.gguf llama-chat.wasm -p mistral-instruct
  ```

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Mixtral-8x7B-Instruct-v0.1-Q2_K.gguf](https://huggingface.co/second-state/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/Mixtral-8x7B-Instruct-v0.1-Q2_K.gguf) | Q2_K | 2 | 17.3 GB| smallest, significant quality loss - not recommended for most purposes |
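
To fetch a quantized file, you can download it directly from this repository. A minimal sketch using `curl` and the standard Hugging Face `resolve` URL for the Q2_K file listed above; swap in another filename for a different quantization:

```bash
# Download the Q2_K quantized model (~17.3 GB) from this repository.
curl -LO https://huggingface.co/second-state/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-Q2_K.gguf
```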
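
If you are not going through LlamaEdge's built-in `mistral-instruct` template, the prompt string shown above can be assembled by hand. A minimal sketch in bash; the message variables are illustrative placeholders, not part of LlamaEdge:

```bash
# Hypothetical two-turn conversation; replace with your own messages.
user_message_1="What is a mixture-of-experts model?"
assistant_message_1="A model that routes each token to a subset of expert subnetworks."
user_message_2="How many experts does Mixtral 8x7B have?"

# Each user turn is wrapped in [INST] ... [/INST]; assistant turns sit between them.
prompt="[INST] ${user_message_1} [/INST] ${assistant_message_1} [INST] ${user_message_2} [/INST]"
printf '%s\n' "$prompt"
```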
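
Once the API server from the "Run as LlamaEdge service" step is up, it can be queried over HTTP. A sketch, assuming the LlamaEdge defaults of port 8080 and an OpenAI-compatible `/v1/chat/completions` endpoint; adjust the host, port, and request fields if your setup or LlamaEdge version differs:

```bash
# Send a single-turn chat request to the local LlamaEdge API server.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```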