LLaMA-2-7B Emotion Analysis with Activity Context
Model Description
This model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on the GoEmotions dataset with activity context integration. It analyzes emotions in text while considering the user's recent activity patterns to provide more contextual insights.
Training Details
Training Data
- Dataset: AA65327/GoEmotions_Alpaca_Final
- Training samples: N/A
- Validation samples: N/A
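The dataset is hosted on the Hugging Face Hub; a minimal loading sketch with the datasets library (the "train" split name is an assumption, since the exact splits are not listed on this card):

from datasets import load_dataset

# Load the Alpaca-formatted GoEmotions data used for fine-tuning.
# The "train" split name is an assumption; inspect dataset.keys() to confirm.
dataset = load_dataset("AA65327/GoEmotions_Alpaca_Final")
print(dataset)
print(dataset["train"][0])  # one example record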
Training Configuration
- Base model: NousResearch/Llama-2-7b-chat-hf
- Training epochs: 1
- Batch size: 1
- Learning rate: 0.0002
- LoRA rank: 8
- LoRA alpha: 32
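These hyperparameters correspond to a PEFT LoraConfig. A minimal sketch is shown below; the target modules come from the Training Procedure section, while the dropout value is an assumption, not taken from this card:

from peft import LoraConfig

# LoRA adapter configuration implied by the values above.
lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=32,                        # LoRA scaling factor
    target_modules=["q_proj", "v_proj"],  # query/value projections (see Training Procedure)
    lora_dropout=0.05,                    # assumed value
    bias="none",
    task_type="CAUSAL_LM",
)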
Performance Metrics
Evaluation Results
- Perplexity: 26.08
- ROUGE-1: 0.190
- ROUGE-2: 0.170
- ROUGE-L: 0.190
- BLEU Score: 8.039
- Inference Speed: 1.3 tokens/sec
- Hallucination Rate: 2.400
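For reference, perplexity is the exponential of the average token-level cross-entropy loss on the validation set. The evaluation script is not included with this card, so the loop below is only a sketch of how such a number might be reproduced; val_dataloader is a hypothetical DataLoader of tokenized validation examples:

import math
import torch

# Sketch: perplexity = exp(mean cross-entropy loss over validation batches).
model.eval()
losses = []
with torch.no_grad():
    for batch in val_dataloader:
        input_ids = batch["input_ids"].to(model.device)
        loss = model(input_ids=input_ids, labels=input_ids).loss
        losses.append(loss.item())
perplexity = math.exp(sum(losses) / len(losses))
print(f"Perplexity: {perplexity:.2f}")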
Usage
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",
    load_in_4bit=True,   # 4-bit quantization; requires the bitsandbytes package
    device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "AA65327/llama2-emotion-activity-20251005")
tokenizer = AutoTokenizer.from_pretrained("AA65327/llama2-emotion-activity-20251005")
# Format your prompt
def format_prompt(instruction, input_text, activity_log):
return f"""Below is an instruction that describes a task, paired with an input that provides further context.
Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
Current message: {input_text}
Activity log (past 3 days, hours per activity): {activity_log}
### Response:
"""
# Example usage
instruction = "Evaluate the emotion in this text and suggest why the person might feel this way."
input_text = "I'm feeling really excited about this new project!"
activity_log = "working_out: [2, 1, 3]; reading: [1, 2, 0]; socializing: [3, 4, 2]"
prompt = format_prompt(instruction, input_text, activity_log)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # move inputs to the model's device
outputs = model.generate(**inputs, max_new_tokens=150, temperature=0.7, do_sample=True)  # sampling so temperature takes effect
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
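The decoded output repeats the full prompt before the generated text. One simple post-processing step (an assumption based on the prompt format above, not something this card specifies) is to split on the response marker:

# Keep only the generated answer after the "### Response:" marker.
answer = response.split("### Response:")[-1].strip()
print(answer)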
Training Procedure
The model was trained with the LoRA (Low-Rank Adaptation) technique using the following approach (a code sketch follows the list):
- Load base LLaMA-2-7B-Chat model with 4-bit quantization
- Apply LoRA adapters to query and value projection layers
- Fine-tune on emotion analysis tasks with activity context
- Implement gradient checkpointing and mixed precision training
- Use early stopping based on validation loss
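A minimal sketch of that recipe with transformers, peft, and bitsandbytes follows. Values not listed under Training Configuration (dropout, evaluation cadence, early-stopping patience) and the dataset variable names are assumptions, not details taken from this card:

import torch
from transformers import (AutoModelForCausalLM, BitsAndBytesConfig,
                          EarlyStoppingCallback, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 1) Load the base model with 4-bit quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# 2) Prepare for k-bit training and attach LoRA adapters to the
#    query/value projections (same config as the sketch above).
base_model = prepare_model_for_kbit_training(base_model)
model = get_peft_model(base_model, LoraConfig(
    r=8, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
))

# 3) Train with gradient checkpointing, mixed precision, and
#    early stopping on validation loss.
args = TrainingArguments(
    output_dir="llama2-emotion-activity",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    learning_rate=2e-4,
    fp16=True,
    gradient_checkpointing=True,
    eval_strategy="steps",            # "evaluation_strategy" on older transformers
    save_strategy="steps",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,      # hypothetical tokenized train split
    eval_dataset=eval_dataset,        # hypothetical tokenized validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()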
Limitations and Bias
- The model may reflect biases present in the training data
- Performance may vary on domains not represented in the training set
- Activity context interpretation is based on patterns learned from training data
- Generated content should be reviewed for factual accuracy
Citation
@misc{llama2-emotion-activity-2025,
author = {AA65327},
title = {LLaMA-2-7B Emotion Analysis with Activity Context},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/AA65327/llama2-emotion-activity-20251005}
}
Acknowledgments
- Meta AI for the base LLaMA-2 model
- Google Research for the GoEmotions dataset
- Hugging Face for the transformers library and model hosting