# LFM2-350M-GGUF

LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-350M

## 🏃 How to run LFM2

Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):

```
llama-cli -hf LiquidAI/LFM2-350M-GGUF
```
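The `-hf` flag tells `llama-cli` to fetch the GGUF weights directly from the Hugging Face Hub before starting an interactive session. A minimal one-shot sketch, assuming llama.cpp is installed and on your `PATH` (the prompt text is purely illustrative):

```shell
# Pull the GGUF weights from the Hub (cached locally after the first run)
# and answer a single prompt instead of starting an interactive chat.
#   -hf : Hugging Face repo to fetch the model from
#   -p  : prompt text (illustrative)
llama-cli -hf LiquidAI/LFM2-350M-GGUF -p "Explain edge AI in one sentence."
```

On subsequent runs the cached download is reused, so only the first invocation needs network access.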

