---
license: gemma
base_model: google/gemma-2-9b-it
base_model_relation: quantized
library_name: mlc-llm
pipeline_tag: text-generation
---

4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [FuseChat-Gemma-2-9B-Instruct](https://huggingface.co/FuseAI/FuseChat-Gemma-2-9B-Instruct) for inference with the [Private LLM](https://privatellm.app) app.
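
These weights are packaged for on-device use in the Private LLM app. For readers who want to experiment with MLC-LLM directly, the sketch below shows the library's standard Python engine API; it is not a confirmed workflow for this repo, and the `HF://` model path is a placeholder, since the OmniQuant weights here may not be in the stock MLC packaging that `MLCEngine` expects.

```python
# Minimal sketch of streaming text generation with the MLC-LLM Python engine.
# NOTE: the model path is a placeholder/assumption; this repo's OmniQuant weights
# are intended for the Private LLM app and may not load as-is with stock mlc_llm.
from mlc_llm import MLCEngine

model = "HF://FuseAI/FuseChat-Gemma-2-9B-Instruct"  # placeholder; an MLC-compiled variant is required

engine = MLCEngine(model)

# MLCEngine exposes an OpenAI-style chat completions interface.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Summarize what OmniQuant quantization does."}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()
```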