# FABSA RoBERTa Sentiment Analysis Model
## Model Overview
This is a fine-tuned RoBERTa model for Aspect-Based Sentiment Analysis (ABSA) on food delivery reviews, achieving 93.97% accuracy on the validation set. The model scores customer reviews against specific aspects such as food quality, delivery service, and pricing.
## What is Aspect-Based Sentiment Analysis?
Unlike traditional sentiment analysis that gives one overall sentiment, ABSA identifies sentiment for specific aspects of a product or service. For example:
"The food was amazing but delivery took forever"
- Food aspect: Positive
- Delivery aspect: Negative
This granular analysis helps businesses identify exactly what customers love and what needs improvement.
## Quick Start
### Using the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "Anudeep-Narala/fabsa-roberta-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Example: analyze a review for one aspect
review = "The food was delicious but the delivery was slow"
aspect = "delivery"  # Can be: food, delivery, service, price, interface, overall

# Format the input as "Review: ... | Aspect: ..."
input_text = f"Review: {review} | Aspect: {aspect}"
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=256)

# Get the prediction
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1).item()
confidence = predictions[0][predicted_class].item()

# Map the class id to a sentiment label
sentiment_map = {0: "negative", 1: "neutral", 2: "positive"}
print(f"Aspect: {aspect}")
print(f"Sentiment: {sentiment_map[predicted_class]}")
print(f"Confidence: {confidence:.2%}")
```
### Batch Processing Multiple Aspects
```python
def analyze_review(review_text, aspects=("food", "delivery", "service", "price")):
    """Analyze a review across multiple aspects."""
    sentiment_map = {0: "negative", 1: "neutral", 2: "positive"}
    results = {}
    for aspect in aspects:
        input_text = f"Review: {review_text} | Aspect: {aspect}"
        inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=256)
        with torch.no_grad():
            outputs = model(**inputs)
        predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
        predicted_class = torch.argmax(predictions, dim=-1).item()
        confidence = predictions[0][predicted_class].item()
        results[aspect] = {
            "sentiment": sentiment_map[predicted_class],
            "confidence": confidence,
        }
    return results

# Example usage
review = "Great food and reasonable prices, but the app keeps crashing"
results = analyze_review(review)
for aspect, result in results.items():
    print(f"{aspect.capitalize()}: {result['sentiment']} (confidence: {result['confidence']:.2%})")
```
## Performance Metrics
| Metric | Value |
|---|---|
| Validation Accuracy | 93.97% |
| Training Loss | 0.1611 |
| Validation Loss | 0.1749 |
| Training Time | 302.74 seconds |
| Training Examples | 13,998 |
| Validation Examples | 1,858 |
## Supported Aspects
The model is trained to analyze sentiment for these specific aspects:
- `food` - Food quality, taste, freshness, presentation
- `delivery` - Delivery speed, reliability, driver behavior, packaging
- `service` - Customer support, staff attitude, responsiveness
- `price` - Value for money, fees, discounts, pricing fairness
- `interface` - App/website usability, navigation, features
- `overall` - General satisfaction and overall experience
## Sentiment Classes
- Positive (2): Favorable opinion, satisfaction, praise
- Neutral (1): Mixed feelings, objective statements, neutral tone
- Negative (0): Complaints, dissatisfaction, criticism
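Rather than hard-coding this mapping, you can check what the checkpoint itself reports. If `id2label` was populated at fine-tuning time it will show the names above; otherwise it falls back to generic `LABEL_i` names.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Anudeep-Narala/fabsa-roberta-sentiment")
print(config.id2label)  # expected: {0: 'negative', 1: 'neutral', 2: 'positive'} if set
```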
## Technical Details
### Model Architecture
- Base Model: `cardiffnlp/twitter-roberta-base-sentiment-latest`
- Architecture: RoBERTa (Robustly Optimized BERT Pretraining Approach)
- Model Type: Encoder-based transformer
- Number of Parameters: ~125M
- Fine-tuning Task: Sequence Classification (3 classes)
### Training Configuration
- Epochs: 3
- Learning Rate: 8e-6 with cosine restarts
- Batch Size: 16
- Max Sequence Length: 128 tokens
- Optimizer: AdamW
- Framework: PyTorch + Hugging Face Transformers
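For readers who want to adapt the fine-tuning setup, the configuration above maps onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch, not the exact training script: the output path and evaluation cadence are placeholders, and older `transformers` versions spell `eval_strategy` as `evaluation_strategy`.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./fabsa-roberta",              # placeholder path
    num_train_epochs=3,
    learning_rate=8e-6,
    lr_scheduler_type="cosine_with_restarts",  # cosine schedule with restarts
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    optim="adamw_torch",                       # AdamW optimizer
    eval_strategy="epoch",                     # evaluation cadence is an assumption
)
```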
### Dataset
- Name: `jordiclive/FABSA`
- Domain: Aspect-based sentiment analysis over customer feedback (Trustpilot, Google Play, and Apple App Store reviews)
- Training Set: 13,998 labeled examples
- Validation Set: 1,858 examples
- Test Set: 1,587 examples (reserved)
- Languages: English
- Annotation: Aspect-level sentiment labels
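The dataset is hosted on the Hugging Face Hub and can be pulled with the `datasets` library. A minimal loading sketch is below; the split and field names should be inspected rather than assumed, since the schema is defined by the dataset, not this model card.

```python
from datasets import load_dataset

ds = load_dataset("jordiclive/FABSA")
print(ds)              # inspect the available splits and columns
print(ds["train"][0])  # one labeled example ("train" split name is an assumption)
```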
## Use Cases
### Business Intelligence
- Customer Feedback Analysis: Automatically categorize and analyze thousands of reviews
- Competitive Analysis: Compare sentiment across platforms and competitors
- Product Development: Identify which aspects need improvement
- Quality Monitoring: Track sentiment trends over time
### Real-time Applications
- Dashboard Analytics: Build live sentiment monitoring dashboards
- Alert Systems: Trigger alerts when negative sentiment spikes (see the sketch after this list)
- Customer Support: Prioritize reviews that need immediate attention
- A/B Testing: Measure impact of changes on specific aspects
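As a sketch of the alert-system idea from the list above: flag reviews whose aspects are confidently negative and route them to support. The threshold is a hypothetical tuning knob, and `analyze_review` is the helper defined in the Quick Start.

```python
def flag_for_attention(review_text, threshold=0.85):
    """Return aspects whose predicted sentiment is confidently negative."""
    results = analyze_review(review_text)
    return [aspect for aspect, r in results.items()
            if r["sentiment"] == "negative" and r["confidence"] >= threshold]

# Example: escalate reviews with confidently negative aspects
if flag_for_attention("Cold food and the driver never showed up"):
    print("Escalate to customer support")
```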
### Research
- Sentiment Analysis Studies: Benchmark against other ABSA models
- Multi-aspect Learning: Study aspect-specific sentiment patterns
- Transfer Learning: Fine-tune for other domains (e-commerce, hospitality)
## Example Results
review = "Amazing pizza and great prices! The delivery was fast but the driver was rude."
Analysis Output:
- Food: Positive (98.5% confidence)
- Price: Positive (94.2% confidence)
- Delivery: Negative (87.6% confidence)
- Service: Negative (91.3% confidence)
This granular insight shows that while the product and pricing are excellent, there are service issues that need addressing.
## Installation
```bash
pip install transformers torch
```
For production environments with GPU acceleration:
```bash
pip install transformers torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
## Performance Tips
- Batch Processing: Process multiple reviews at once for better throughput
- GPU Acceleration: Use CUDA for ~10x faster inference
- Model Quantization: Use quantization for reduced memory footprint
- ONNX Export: Convert to ONNX for optimized production deployment
```python
# Enable GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# Inputs must live on the same device as the model, e.g.:
# inputs = {k: v.to(device) for k, v in inputs.items()}
```
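Dynamic quantization is a low-effort way to cut the memory footprint for CPU inference; a minimal sketch with PyTorch's built-in API is below. Quantization can shift predictions slightly, so validate accuracy before deploying. For ONNX export, the Hugging Face Optimum library provides exporters for sequence-classification checkpoints.

```python
# Quantize the model's linear layers to int8 (CPU inference only)
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```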
## Model Evolution
This model represents the final iteration of extensive experimentation:
- Red-Pajama-7B: 8% accuracy (decoder limitations for classification)
- DialoGPT-small: 51.5% (baseline)
- RoBERTa Basic: 86% (initial fine-tuning)
- RoBERTa Enhanced: 90.7% (improved hyperparameters)
- RoBERTa Neutral-focused: 91.7% (class imbalance handling)
- RoBERTa Final: 93.97% (optimal configuration)
## Related Resources
- GitHub Repository: aspect-based-sentiment-analysis
- Interactive Demo: See the repository for the visualization dashboard
- Dataset Schema: CSV format with aspect-level annotations
- Training Code: Available in the repository
## Citation
If you use this model in your research or application, please cite:
```bibtex
@misc{narala2025fabsa,
  author       = {Anudeep Reddy Narala},
  title        = {FABSA RoBERTa: Fine-tuned Model for Aspect-Based Sentiment Analysis on Food Delivery Reviews},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/Anudeep-Narala/fabsa-roberta-sentiment}},
}
```
## Contact
- Author: Anudeep Reddy Narala
- Email: [email protected]
- GitHub: @Anudeepreddynarala
## License
This model is released under the MIT License. Feel free to use it for commercial and non-commercial applications.
## Acknowledgments
- Base model: `cardiffnlp/twitter-roberta-base-sentiment-latest`
- Framework: Hugging Face Transformers
- Compute: Training performed on GPU infrastructure
Ready to analyze your customer feedback? Try the model now!