---
library_name: transformers
tags:
- agent
license: mit
datasets:
- Estwld/empathetic_dialogues_llm
language:
- en
metrics:
- bleu
- rouge
base_model:
- microsoft/DialoGPT-medium
---
# 🧠 SupportPal: A Generative AI Chatbot for Emotional Support and Stress Relief

**Model Name:** `bhushanrocks/supportpal-dialoGPT`

**Base Model:** [`microsoft/DialoGPT-medium`](https://huggingface.co/microsoft/DialoGPT-medium)

**Dataset:** [EmpatheticDialogues](https://huggingface.co/datasets/facebook/empathetic_dialogues)

**Language:** English

**License:** MIT

**Author:** Bhushan Gupta

**Intended Use:** Emotional Support / Mental Wellness Chatbot (Non-clinical)

---

## 💬 Overview

**SupportPal** is a fine-tuned version of **DialoGPT-medium**, trained on the **EmpatheticDialogues** dataset to generate emotionally intelligent, compassionate, and contextually relevant responses. It serves as a **digital emotional support companion** that encourages open, human-like conversation about feelings such as loneliness, anxiety, or stress.

This project demonstrates how **generative AI** can assist with **non-clinical mental health support** through a safe, ethical, and lightweight fine-tuning approach.

---

## 🎯 Objectives

- Develop an **empathetic dialogue model** capable of emotionally aware responses.
- Fine-tune with **lightweight PEFT/LoRA techniques** to fit on limited GPUs.
- Improve the **coherence, empathy, and tone sensitivity** of generated replies.
- Encourage safe and ethical use of AI for emotional well-being.
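
The PEFT/LoRA setup mentioned above can be sketched as a configuration fragment. The exact hyperparameters used for SupportPal are not recorded in this card, so the values below (`r`, `lora_alpha`, `lora_dropout`) are typical defaults for a GPT-2-family model such as DialoGPT, not the ones actually used:

```python
# Hypothetical LoRA configuration for DialoGPT-medium. As a GPT-2-family
# model, its fused attention projection layer is named "c_attn".
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # low-rank adapter dimension (assumed)
    lora_alpha=16,              # scaling factor (assumed)
    lora_dropout=0.05,          # adapter dropout (assumed)
    target_modules=["c_attn"],  # GPT-2 attention projection
)
# The adapter is attached with peft.get_peft_model(base_model, lora_config),
# after which only a small fraction of the weights remains trainable.
```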

---

## ⚙️ Model Details

| **Parameter** | **Value** |
|----------------|-----------|
| Base Model | DialoGPT-medium |
| Dataset | EmpatheticDialogues |
| Training Epochs | 1 per chunk (≈9 total) |
| Batch Size | 2 |
| Gradient Accumulation | 4 |
| Learning Rate | 5e-5 |
| Warmup Steps | 50 |
| Optimizer | AdamW |
| Precision | FP16 |
| Framework | 🤗 Transformers + PEFT |
| Hardware | NVIDIA T4 (Google Colab) |

**Training Approach:**
The dataset was split into chunks of 5,000 samples for memory-efficient fine-tuning. The model was trained on each chunk incrementally, and the resulting checkpoint was pushed to the Hugging Face Hub to preserve progress across Colab sessions.
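
The chunking scheme described above can be sketched in a few lines. `split_into_chunks` is an illustrative helper, and the per-chunk training loop is shown only as commented pseudocode, since the actual training script is not included in this card; ≈9 chunks of 5,000 samples implies roughly 45,000 training examples:

```python
# Sketch of the chunked fine-tuning workflow: split the data into
# fixed-size chunks, then train one epoch per chunk per session.

def split_into_chunks(samples, chunk_size=5000):
    """Split a list of samples into fixed-size chunks (last may be smaller)."""
    return [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]

dataset = list(range(45_000))   # stand-in for the tokenized dialogues
chunks = split_into_chunks(dataset)
print(len(chunks))              # 9

# for chunk in chunks:                  # hypothetical per-session loop:
#     trainer.train_on(chunk)           #   one epoch on this chunk
#     model.push_to_hub(model_id)       #   checkpoint progress to the Hub
```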

---

## 📊 Evaluation Metrics

| **Metric** | **Before Fine-tuning** | **After Fine-tuning** |
|-------------|-----------------------|-----------------------|
| Empathy (human-rated) | 4.2 | 8.3 |
| Coherence | 5.1 | 8.0 |
| Tone Appropriateness | 4.8 | 8.5 |
| ROUGE-L | N/A | 0.37 |
| BLEU | N/A | 0.21 |

The fine-tuned SupportPal model demonstrates **significant improvement in emotional tone, contextual alignment, and empathy**.
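
For reference, ROUGE-L is an F-measure over the longest common subsequence (LCS) of the candidate and reference texts. The standalone sketch below just makes the computation concrete; in practice a library such as Hugging Face `evaluate` would be used to score model outputs against reference replies:

```python
# Minimal illustration of ROUGE-L (LCS-based F1) on whitespace tokens.

def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    """ROUGE-L F1 between a candidate and a reference string."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(round(rouge_l("i am here for you", "i am always here for you"), 3))  # 0.909
```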

---

## 🧩 Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "bhushanrocks/supportpal-dialoGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# DialoGPT defines no pad token; reuse the EOS token to avoid padding errors.
tokenizer.pad_token = tokenizer.eos_token

chatbot = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=150)

prompt = "I've been feeling really lonely lately."
response = chatbot(prompt, do_sample=True, temperature=0.7, top_k=50)[0]["generated_text"]
print(response)
```