This model is an uncensored version of LiquidAI/LFM2-2.6B.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_id = "sirev/LFM2-2.6B-Uncensored-X64"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

messages = [
    {"role": "user", "content": "Your message here..."}
]

# Render the chat template and tokenize in one step
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(device)

print(f"User: {messages[0]['content']}")

outputs = model.generate(
    **inputs,
    temperature=0.3,
    do_sample=True,
    repetition_penalty=1.2,
    max_new_tokens=2048,
)

# Decode only the newly generated tokens, skipping the prompt
print(f"AI: {tokenizer.decode(outputs[0][inputs['input_ids'].shape[-1]:], skip_special_tokens=True)}")
```

Chat Format:

```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
```
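
To inspect the exact string this template produces for your own messages, you can render it without tokenizing (a quick sanity check, reusing the `tokenizer` and `messages` from the example above):

```python
# Render the chat template as plain text to verify the format shown above
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
```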

For reasoning output, change the assistant prefix `<|im_start|>assistant` to `<|im_start|>assistant<think>`.

These are benchmark results from EleutherAI/lm-evaluation-harness. The original model was benchmarked with dtype float16, which may cause performance degradation.

| Benchmark (0-shot) | LFM2-2.6B-Uncensored-X64 | LiquidAI/LFM2-2.6B |
|---|---|---|
| ARC-Challenge | 45.39 % | 44.71 % |
| ARC-Easy | 58.80 % | 56.36 % |
| HellaSwag | 62.27 % | 59.71 % |
| MMLU | 63.03 % | 62.68 % |
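
A sketch of reproducing these numbers with the harness's Python API (lm-eval v0.4+). Only the 0-shot setting and float16 dtype are stated above; the exact task names are assumptions:

```python
# Sketch: 0-shot evaluation with EleutherAI/lm-evaluation-harness (pip install lm-eval)
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=sirev/LFM2-2.6B-Uncensored-X64,dtype=float16",  # dtype per the note above
    tasks=["arc_challenge", "arc_easy", "hellaswag", "mmlu"],  # assumed task names
    num_fewshot=0,
)
print(results["results"])
```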