---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
base_model: LiquidAI/LFM2-1.2B-RAG
---
# LFM2-1.2B-RAG-GGUF
Built on [LFM2-1.2B](https://huggingface.co/LiquidAI/LFM2-1.2B), LFM2-1.2B-RAG is specialized in answering questions grounded in provided contextual documents, for use in RAG (Retrieval-Augmented Generation) systems.
**Use cases**:
- Chatbot for answering questions about a particular product's documentation.
- Customer support with an internal knowledge base to provide grounded answers.
- Academic research assistant with multi-turn conversations about research papers and course materials.
You can find more information about other task-specific models in this [blog post](https://www.liquid.ai/blog/introducing-liquid-nanos-frontier-grade-performance-on-everyday-devices).
## 🏃 How to run LFM2
Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):
```bash
llama-cli -hf LiquidAI/LFM2-1.2B-RAG-GGUF
```
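In a RAG pipeline, the retrieved documents are supplied to the model as conversation context and the question follows in the user turn. Below is a minimal sketch, assuming a recent llama.cpp build that supports the `-sys` flag; the context string and prompt layout are illustrative, not a requirement of the model:
```bash
# Pull the GGUF from Hugging Face and start an interactive chat.
# The retrieved passage is passed as the system prompt (-sys requires a
# recent llama.cpp build); the passage text here is purely illustrative.
llama-cli -hf LiquidAI/LFM2-1.2B-RAG-GGUF \
  -sys "Answer strictly based on the following context: LFM2 is a family of hybrid models designed for efficient on-device inference." \
  --temp 0.3
```
You can then type questions about the supplied context at the interactive prompt.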