# 🧠 SupportPal: A Generative AI Chatbot for Emotional Support and Stress Relief
- **Model Name:** bhushanrocks/supportpal-dialoGPT
- **Base Model:** microsoft/DialoGPT-medium
- **Dataset:** EmpatheticDialogues
- **Language:** English
- **License:** MIT
- **Author:** Bhushan Gupta
- **Intended Use:** Emotional support / mental wellness chatbot (non-clinical)
## 💬 Overview
SupportPal is a fine-tuned version of DialoGPT-medium, trained on the EmpatheticDialogues dataset to generate emotionally intelligent, compassionate, and contextually relevant responses.
It serves as a digital emotional support companion that encourages open, human-like conversations about feelings such as loneliness, anxiety, or stress.
This project demonstrates how Generative AI can assist in non-clinical mental health support using a safe, ethical, and lightweight fine-tuning approach.
## 🎯 Objectives
- Develop an empathetic dialogue model capable of emotionally aware responses.
- Fine-tune with lightweight PEFT/LoRA techniques to fit on limited GPUs.
- Improve coherence, empathy, and tone sensitivity of generated replies.
- Encourage safe and ethical use of AI for emotional well-being.
## ⚙️ Model Details
| Parameter | Value |
|---|---|
| Base Model | DialoGPT-medium |
| Dataset | EmpatheticDialogues |
| Training Epochs | 1 per chunk (≈9 total) |
| Batch Size | 2 |
| Gradient Accumulation | 4 |
| Learning Rate | 5e-5 |
| Warmup Steps | 50 |
| Optimizer | AdamW |
| Precision | FP16 |
| Framework | 🤗 Transformers + PEFT |
| Hardware | NVIDIA T4 (Google Colab) |
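The hyperparameters in the table above can be expressed as a minimal configuration sketch. The LoRA settings (rank, alpha, dropout, target modules) are illustrative assumptions, since the card does not record them; `c_attn` is the GPT-2-style fused attention projection that DialoGPT uses.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Hypothetical LoRA settings -- rank, alpha, dropout, and target modules
# are assumptions, not values recorded in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2-style attention projection in DialoGPT
    task_type="CAUSAL_LM",
)

# Trainer settings taken from the table above
training_args = TrainingArguments(
    output_dir="supportpal-checkpoints",
    num_train_epochs=1,               # one epoch per 5,000-sample chunk
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=5e-5,
    warmup_steps=50,
    fp16=True,                        # FP16 on the Colab T4
)
```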
**Training Approach:** The dataset was split into chunks of 5,000 samples for memory-efficient fine-tuning. Each chunk was trained incrementally, and the resulting checkpoint was pushed to the Hugging Face Hub to preserve progress across Colab sessions.
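The chunked schedule can be sketched as follows. `train_chunk` is a hypothetical wrapper around the 🤗 `Trainer`, the sample count of 45,000 is an assumption consistent with the roughly nine chunks mentioned above, and the Hub repo name mirrors this card's model ID.

```python
def chunk_ranges(n_samples, chunk_size=5000):
    """Split n_samples into consecutive (start, end) index ranges."""
    return [(start, min(start + chunk_size, n_samples))
            for start in range(0, n_samples, chunk_size)]


def train_chunk(start, end, hub_repo="bhushanrocks/supportpal-dialoGPT"):
    """Hypothetical per-chunk step: fine-tune on dataset[start:end] for one
    epoch with the Hugging Face Trainer, then push the checkpoint to the Hub
    so progress survives a Colab disconnect. Sketch only -- it needs the
    model, dataset, and TrainingArguments from the surrounding setup.

    # trainer = Trainer(model=model, args=training_args,
    #                   train_dataset=dataset.select(range(start, end)))
    # trainer.train()
    # trainer.push_to_hub(hub_repo)
    """


# Assumed ~45,000 training samples -> nine chunks of 5,000
for start, end in chunk_ranges(45_000):
    train_chunk(start, end)
```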
## 📊 Evaluation Metrics
| Metric | Before Fine-tuning | After Fine-tuning |
|---|---|---|
| Empathy (Human-rated) | 4.2 | 8.3 |
| Coherence | 5.1 | 8.0 |
| Tone Appropriateness | 4.8 | 8.5 |
| ROUGE-L | N/A | 0.37 |
| BLEU | N/A | 0.21 |
The fine-tuned SupportPal model shows marked improvement over the base model in emotional tone, contextual alignment, and empathy.
## 🧩 Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "bhushanrocks/supportpal-dialoGPT"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a text-generation pipeline for conversational replies
chatbot = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=150)

prompt = "I've been feeling really lonely lately."
# Sample a response; temperature and top_k trade off safety vs. diversity
response = chatbot(prompt, do_sample=True, temperature=0.7, top_k=50)[0]["generated_text"]
print(response)
```
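DialoGPT-style models are usually prompted with the conversation history joined by the end-of-sequence token, so multi-turn use can be sketched as below. The `build_history` helper is illustrative, not part of the released model.

```python
def build_history(turns, eos_token):
    """Join conversation turns with the EOS token, as DialoGPT expects,
    leaving a trailing EOS so the model generates the next turn."""
    return "".join(turn + eos_token for turn in turns)


# Usage sketch (requires network access to download the model):
# from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
# tokenizer = AutoTokenizer.from_pretrained("bhushanrocks/supportpal-dialoGPT")
# model = AutoModelForCausalLM.from_pretrained("bhushanrocks/supportpal-dialoGPT")
# chatbot = pipeline("text-generation", model=model, tokenizer=tokenizer)
# history = build_history(
#     ["I've been feeling really lonely lately.",
#      "I'm sorry to hear that. Do you want to talk about it?",
#      "Yes, I just moved to a new city."],
#     tokenizer.eos_token,
# )
# print(chatbot(history, do_sample=True, temperature=0.7, top_k=50,
#               max_new_tokens=150)[0]["generated_text"])
```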