Tiny-Purr-350M 🐱
A fine-tuned version of LiquidAI/LFM2-350M, trained on the The-Tiny-Purr-2 dataset to generate conversational responses with a casual, friendly, and cat-themed personality.
Model Description
This model adapts the LFM2-350M base model to respond in a distinctive conversational style characterized by:
- Lowercase, casual writing
- Friendly and approachable tone
- Cat-themed expressions and personality
- Informative yet playful responses
- Bilingual capabilities (English and Chinese)
Fine-tuning method: LoRA (Low-Rank Adaptation)
Trainable parameters: 491,520 (0.14% of total parameters)
Training epochs: 3
Max sequence length: 1024 tokens
Intended Use
This model is designed for:
- Casual conversational AI applications
- Educational chatbots with personality
- Creative writing assistants
- Fun, engaging Q&A systems
Not recommended for:
- Formal or professional communications
- Critical decision-making systems
- Medical, legal, or financial advice
Training Details
Training Data
- Dataset: purrgpt-community/The-Tiny-Purr-2
- Content: Conversational Q&A pairs covering academic conferences, university activities, gaming industry news, technology topics, and general knowledge
- Languages: English and Chinese
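To inspect the data before training, here is a minimal sketch using the `datasets` library (the `train` split and the printed columns are assumptions; check the dataset card for the actual schema):

from datasets import load_dataset

# Load the fine-tuning corpus from the Hub (split name assumed to be "train").
dataset = load_dataset("purrgpt-community/The-Tiny-Purr-2", split="train")

# Peek at the schema and one record; column names depend on the dataset.
print(dataset.column_names)
print(dataset[0])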
Training Procedure
LoRA Configuration:
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                    # rank of the low-rank update matrices
    lora_alpha=32,           # scaling factor (effective scale alpha/r = 2)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
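For context, a minimal sketch of how a config like this is attached to the base model with PEFT (variable names are illustrative, not taken from the training script):

from peft import get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
model = get_peft_model(base_model, lora_config)  # lora_config as defined above
model.print_trainable_parameters()  # reports trainable params, e.g. 491,520 (0.14%)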
Training Hyperparameters:
- Learning rate: 2e-4
- Batch size: 4 (per device)
- Gradient accumulation steps: 4
- Effective batch size: 16
- Optimizer: AdamW
- LR scheduler: Cosine
- Warmup ratio: 0.03
- Precision: BFloat16
- Epochs: 3
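A minimal `transformers.TrainingArguments` sketch that mirrors these settings (the output directory is a placeholder, and any option not listed above is left at its default):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tiny-purr-350m",      # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,    # effective batch size: 4 * 4 = 16
    num_train_epochs=3,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    bf16=True,
)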
Hardware
- GPU: NVIDIA P100 (16GB VRAM)
- Training time: ~25 minutes
Usage
Loading the Model
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the LFM2-350M base weights and the LoRA adapter in a single call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "purrgpt-community/Tiny-Purr-350M",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("purrgpt-community/Tiny-Purr-350M")
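Optionally, the adapter can be folded into the base weights for standalone inference. This is a standard PEFT operation, not a step the card requires:

# merge_and_unload() returns a plain transformers model with the LoRA
# weights merged in, removing the PEFT dependency at inference time.
merged_model = model.merge_and_unload()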
Inference
prompt = "<|user|>\nWhat is notable about the ICSE 2002 conference?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=150,
temperature=0.7,
top_p=0.9,
do_sample=True
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
If you want to include a system prompt, prepend a <|system|> turn:
prompt = "<|system|>\nYou are Tiny-Purr\n<|user|>\nWhat is notable about the ICSE 2002 conference?\n<|assistant|>\n"
Example Output
<|user|>
What is notable about the ICSE 2002 conference?
<|assistant|>
oh, the 2002 icse conference, you say? sounds like a lot of stuff, user. *purrrr*.
the icse conference was a big deal. they brought together top scientists and
researchers from all over the world to discuss cutting-edge topics in science
and technology. the 2002 one was particularly notable because it focused on
areas like nanotechnology, artificial intelligence, and bioengineering. it
really opened up new avenues for research and collaboration.
did you know they also got a lot of international media coverage? that's
something. it's pretty much like the world's biggest science fair, but with
even more top-tier scientists.
Prompt Format
The model expects prompts in the following format:
<|user|>
[Your question or prompt here]
<|assistant|>
The model will then generate a response following the assistant tag.
Limitations
- Casual tone only: Not suitable for formal or professional contexts
- Factual accuracy: May produce creative or incorrect information, especially for recent events
- Bias: Inherits biases from both the base model and training dataset
- Language mixing: May occasionally mix English and Chinese unexpectedly
- Context length: Fine-tuned with a maximum sequence length of 1024 tokens; quality may degrade on longer inputs (see the truncation sketch below)
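To stay within the 1024-token training window, inputs can be truncated defensively at tokenization time. A minimal sketch, reusing `tokenizer`, `prompt`, and `model` from the Usage section:

# Cap the prompt at the fine-tuning sequence length (1024 tokens).
inputs = tokenizer(prompt, return_tensors="pt",
                   truncation=True, max_length=1024).to(model.device)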
Ethical Considerations
- This model should not be used for impersonation or generating misleading content
- Responses should be verified for accuracy in critical applications
- The casual, playful tone may not be appropriate for all audiences
- Users should be aware they are interacting with an AI system
Citation
If you use this model, please cite:
@misc{tiny-purr-350m-2024,
  title={Tiny-Purr-350M: A Conversational Fine-tune of LFM2-350M},
  author={[Your Name]},
  year={2024},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/purrgpt-community/Tiny-Purr-350M}}
}
Base Model Citation
@misc{liquid2024lfm,
  title={Liquid Foundation Models},
  author={Liquid AI Team},
  year={2024},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/LiquidAI/LFM2-350M}}
}
Acknowledgments
- Base Model: LiquidAI/LFM2-350M
- Training Dataset: purrgpt-community/The-Tiny-Purr-2
- Fine-tuning Framework: Hugging Face PEFT (LoRA)
License
This model is released under the Apache 2.0 license, inheriting from the base LFM2-350M model.
Made with 🐱 and LoRA