mlabonne committed on
Commit 4ca024a · verified · 1 Parent(s): 31cd51d

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -60,14 +60,14 @@ base_model:
 
 # LFM2-350M-GGUF
 
-Based on the [LFM2-350M](https://huggingface.co/LiquidAI/LFM2-350M) model, this checkpoint has been fine-tuned for near real-time **bi-directional Japanese/English translation** of short-to-medium inputs.
+LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
 
-Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-350M-ENJP-MT
+Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-350M
 
 ## 🏃 How to run LFM2
 
 Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):
 
 ```
-llama-cli -hf LiquidAI/LFM2-350M-ENJP-MT-GGUF
-```
+llama-cli -hf LiquidAI/LFM2-350M-GGUF
+```
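
Besides the one-line `llama-cli` invocation shown in the updated README, the same GGUF weights can be loaded from Python. The sketch below is illustrative and not part of this commit: it assumes the `llama-cpp-python` bindings are installed and that the repo contains a quantized file matching the `*Q4_K_M.gguf` pattern (check the repo's file list and adjust the pattern to the quantization you want).

```python
# Illustrative sketch (not from this commit): run LFM2-350M-GGUF via llama-cpp-python.
# Assumes: pip install llama-cpp-python huggingface-hub, and that a file matching
# "*Q4_K_M.gguf" exists in LiquidAI/LFM2-350M-GGUF (adjust if the filenames differ).
from llama_cpp import Llama

# Download a quantized GGUF file from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2-350M-GGUF",
    filename="*Q4_K_M.gguf",  # glob over the repo's GGUF files
    n_ctx=4096,               # context window size
    verbose=False,
)

# Simple chat-style generation
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is C. elegans?"}],
    max_tokens=128,
    temperature=0.3,
)
print(response["choices"][0]["message"]["content"])
```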