---
base_model:
- LiquidAI/LFM2-1.2B
---
# LFM2-1.2B

Run **LFM2-1.2B** on Android devices powered by Qualcomm NPUs.
## Quickstart

See the [Nexa SDK Android quickstart](https://docs.nexa.ai/nexa-sdk-android/quickstart) to get started.
## Model Description

**LFM2-1.2B** is part of Liquid AI’s second-generation **LFM2** family, designed specifically for **on-device and edge AI deployment**. With **1.2 billion parameters**, it balances compact size, strong reasoning, and efficient compute use, making it well suited to CPUs, GPUs, and NPUs.
LFM2 introduces a **hybrid Liquid architecture** built on **multiplicative gates and short convolutions**, enabling faster convergence and improved contextual reasoning. It trains up to **3× faster** and runs inference up to **2× faster** on CPU compared to Qwen3, while maintaining superior accuracy across multilingual and instruction-following benchmarks.
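The gating-plus-short-convolution idea can be sketched in a few lines of PyTorch. The block below is purely illustrative and is **not** the actual LFM2 layer; the layer names, kernel size, and dimensions are assumptions chosen for readability.

```python
# Illustrative sketch only (not the LFM2 implementation): a multiplicative gate
# feeding a short depthwise convolution. Names, kernel size, and dimensions are
# assumptions for clarity.
import torch
import torch.nn as nn


class GatedShortConvBlock(nn.Module):
    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.in_proj = nn.Linear(dim, dim)
        self.gate_proj = nn.Linear(dim, dim)
        # Short depthwise convolution over the sequence axis.
        self.conv = nn.Conv1d(dim, dim, kernel_size, groups=dim, padding=kernel_size - 1)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        gated = self.in_proj(x) * torch.sigmoid(self.gate_proj(x))  # multiplicative gate
        h = self.conv(gated.transpose(1, 2))                        # (batch, dim, seq_len + k - 1)
        h = h[..., : x.shape[1]].transpose(1, 2)                    # trim padding -> causal outputs
        return self.out_proj(h)


if __name__ == "__main__":
    block = GatedShortConvBlock(dim=64)
    y = block(torch.randn(2, 16, 64))
    print(y.shape)  # torch.Size([2, 16, 64])
```

Because the convolution kernel is short and depthwise, each step touches only a small, fixed window of past tokens, which is what keeps prefill and decode cheap on edge hardware.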
## Features

- ⚡ **Speed & Efficiency** – Up to 2× faster inference and prefill on CPU.
- 🧠 **Hybrid Liquid Architecture** – Combines multiplicative gating with short convolutions for better reasoning and token reuse.
- 🌍 **Multilingual Competence** – Supports diverse languages for global use cases.
- 🛠 **Flexible Deployment** – Runs efficiently on CPU, GPU, and NPU hardware.
- 📈 **Benchmark Performance** – Outperforms similarly sized models on math, knowledge, and reasoning tasks.
## Use Cases

- Edge AI assistants and voice agents
- Offline reasoning and summarization on mobile or automotive devices
- Local code and text generation tools
- Lightweight multimodal or RAG pipelines
- Domain-specific fine-tuning for vertical applications (e.g., finance, robotics)
## Inputs and Outputs

**Input**

- Text prompts or structured instructions (tokenized sequences for API use).

**Output**

- Natural-language or structured text generations.
- Optionally, logits or embeddings for advanced downstream integration.
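To make the text-in / text-out interface concrete, here is a minimal sketch that loads the upstream base checkpoint listed in the metadata (`LiquidAI/LFM2-1.2B`) with Hugging Face `transformers`. It is an illustration only: this repository targets Nexa SDK on Qualcomm NPUs, and the snippet assumes a recent `transformers` release with LFM2 support and a standard CPU/GPU backend.

```python
# Hedged example: text prompt in, generated text out, using the upstream base
# checkpoint rather than the NPU build packaged in this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Input: a chat-style prompt rendered through the model's chat template.
messages = [{"role": "user", "content": "Summarize why short convolutions help on edge devices."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Output: generated token ids, decoded back to natural-language text.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```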
## License

This model is released under the **Creative Commons Attribution–NonCommercial 4.0 (CC BY-NC 4.0)** license.
Non-commercial use, modification, and redistribution are permitted with attribution.
For commercial licensing, please contact **[email protected]**.