Instructions for using LiquidAI/LFM2.5-1.2B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LiquidAI/LFM2.5-1.2B-Instruct with Transformers:
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="LiquidAI/LFM2.5-1.2B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")
model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LiquidAI/LFM2.5-1.2B-Instruct with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LiquidAI/LFM2.5-1.2B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LiquidAI/LFM2.5-1.2B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/LiquidAI/LFM2.5-1.2B-Instruct
```
- SGLang
How to use LiquidAI/LFM2.5-1.2B-Instruct with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "LiquidAI/LFM2.5-1.2B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LiquidAI/LFM2.5-1.2B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "LiquidAI/LFM2.5-1.2B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LiquidAI/LFM2.5-1.2B-Instruct",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use LiquidAI/LFM2.5-1.2B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/LiquidAI/LFM2.5-1.2B-Instruct
```
Trouble with Data Extraction using Custom Schema
Hello Liquid AI team,
I've tried using both this model and LFM2-1.2B-Extract for data extraction from texts.
However, I find that these models are very prone to errors and hallucinations when I specify a custom schema. As an example:
system:
Identify and extract information matching the following schema. Return data as a JSON object. Missing data should be omitted.
Schema:
- discovery: "what was being discovered/invented"
- person: "the person who made the discovery"
- where: "the location where the discovery was made"
- when: "the time when the discovery was made"
- doi: "doi of the source"
user input:
The lightning rod was invented by Benjamin Franklin in the early 1750s.
model response:
```json
{
  "discovery": "lightning rod",
  "person": "Benjamin Franklin",
  "where": "early 1750s",
  "when": "early 1750s",
  "doi": "10.1007/978-3-319-25588-9"
}
```
Clearly, "early 1750s" is not a location, and the model hallucinated a doi when the original text contained none.
If I remove the custom schema from the system prompt, the model is correct far more often. However, this produces inconsistent JSON schemas that may include missing or extraneous fields, which are hard to ingest.
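If schema drift is the main blocker, one option besides prompting is to constrain decoding server-side. vLLM's OpenAI-compatible endpoint, for example, accepts a JSON Schema via an extra `guided_json` field in the request body (the field name and schema below are an illustrative sketch; check the structured-output docs for your vLLM version, as newer releases also support OpenAI-style `response_format`):

```python
import json

# JSON Schema mirroring the custom extraction schema above.
# Every field is optional so missing data can simply be omitted,
# matching the "Missing data should be omitted" instruction.
extraction_schema = {
    "type": "object",
    "properties": {
        "discovery": {"type": "string"},
        "person": {"type": "string"},
        "where": {"type": "string"},
        "when": {"type": "string"},
        "doi": {"type": "string"},
    },
    "additionalProperties": False,
}

# Request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint;
# "guided_json" is a vLLM-specific extra field carrying the schema.
payload = {
    "model": "LiquidAI/LFM2.5-1.2B-Instruct",
    "messages": [
        {
            "role": "system",
            "content": "Identify and extract information matching the schema. "
                       "Return data as a JSON object. Omit missing data.",
        },
        {
            "role": "user",
            "content": "The lightning rod was invented by Benjamin Franklin "
                       "in the early 1750s.",
        },
    ],
    "temperature": 0,
    "guided_json": extraction_schema,
}

print(json.dumps(payload, indent=2))
```

With a constraint like this, the server guarantees the output parses against the schema, though it cannot by itself stop the model from putting a date in the `where` field.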
Temperature is set to 0, so I'm not sure if I'm doing something wrong. I would otherwise love to use your very speedy and lightweight models, but currently the error rates make them a pass.
Hey @Purplys , sorry for the late reply. I tried reproducing what you described, but I get pretty good results with your example (see this chat with the playground). Can you double-check your generation parameters?
Hi, thanks for your response!
I double-checked my generation parameters. It turns out I had repetition_penalty set too high. Toning it down to your recommended settings largely fixed the issues.
By the way, if I may ask, do you have any plans to release fine-tunes of LFM2.5 for extraction? Something like the Liquid Nano series for LFM2.
Great! This is something we're exploring at the moment, so that's good feedback. Thanks!