# Murli Assistant - Fine-tuned Phi-2 with LoRA
This model is a fine-tuned version of microsoft/phi-2 using LoRA (Low-Rank Adaptation) on Brahma Kumaris Murli data.
## Model Description
- Base Model: microsoft/phi-2 (2.7B parameters)
- Fine-tuning Method: LoRA (r=8, alpha=16)
- Training Data: 100+ daily murlis sourced from a MongoDB database
- Use Case: Spiritual guidance and murli knowledge assistant
## Training Details
- LoRA Rank (r): 8
- LoRA Alpha: 16
- Target Modules: q_proj, k_proj, v_proj, o_proj
- Training Examples: 201 formatted instructions
- Adapter Size: ~15MB
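
The hyperparameters above correspond to a PEFT `LoraConfig` along the following lines. This is a sketch reconstructed from the numbers on this card, not the actual training configuration; in particular, the dropout value is an assumption:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                  # LoRA rank
    lora_alpha=16,        # scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,    # assumed; dropout is not stated on this card
    bias="none",
    task_type="CAUSAL_LM",
)
```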
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model in half precision, placed automatically on available devices
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "eswarankrishnamurthy/murli-assistant-phi2-lora")

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
tokenizer.pad_token = tokenizer.eos_token  # phi-2 has no dedicated pad token

# Build the prompt in the instruction format used during fine-tuning
question = "What is the essence of today's murli?"
prompt = f"### Instruction:\n{question}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
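
If you serve many requests, you can optionally fold the adapter weights into the base model. The `merge_and_unload()` call is part of the standard PEFT API; whether it is worthwhile depends on your deployment:

```python
# Optional: merge the LoRA weights into the base model, removing the
# PEFT wrapper and adapter indirection at inference time
merged_model = model.merge_and_unload()
merged_model.save_pretrained("murli-assistant-phi2-merged")  # saves the full model, not the ~15MB adapter
```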
## Inference API

This model is also available via the Hugging Face Inference API:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/eswarankrishnamurthy/murli-assistant-phi2-lora"
headers = {"Authorization": f"Bearer {YOUR_HF_TOKEN}"}  # your Hugging Face access token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({"inputs": "What is soul consciousness?"})
print(output)
```
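
Generation parameters can be passed alongside the inputs. The field names below follow the standard Inference API text-generation payload, and the prompt reuses the instruction template from the Usage section:

```python
output = query({
    "inputs": "### Instruction:\nWhat is soul consciousness?\n\n### Response:\n",
    "parameters": {"max_new_tokens": 200, "temperature": 0.7, "return_full_text": False},
})
print(output)
```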
## Training Information
The model was trained on diverse murli content including:
- Daily murli essence
- Blessings and slogans
- Questions and answers
- Spiritual teachings and guidance
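
Each training example was serialized into the instruction/response template shown in the Usage section. A minimal sketch of such a formatter follows; `format_example` is a hypothetical helper, not code from the actual training pipeline:

```python
def format_example(question: str, answer: str) -> str:
    # Hypothetical helper mirroring the "### Instruction / ### Response"
    # template from the Usage section; the exact training-time code is
    # not published on this card.
    return f"### Instruction:\n{question}\n\n### Response:\n{answer}"

print(format_example("What is the essence of today's murli?", "..."))
```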
## Limitations
- Performs best on spiritual/murli-related queries; answers outside that domain may be unreliable
- A GPU is recommended for responsive inference
- CPU inference is possible but noticeably slower (see the sketch below)
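
For CPU-only machines, a minimal loading sketch; float32 is used because half-precision matmuls are slow or unsupported on most CPUs:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# CPU-only loading: full precision avoids fp16 issues on CPU
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float32,
    device_map="cpu",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, "eswarankrishnamurthy/murli-assistant-phi2-lora")
```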
## Citation
If you use this model, please cite:
```bibtex
@misc{murli-assistant-phi2,
  author    = {eswarankrishnamurthy},
  title     = {Murli Assistant - Fine-tuned Phi-2},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/eswarankrishnamurthy/murli-assistant-phi2-lora}
}
```
## Contact
For questions or feedback, please open an issue on the model repository.