---
language:
- en
tags:
- sentiment-analysis
- distilbert
- text-classification
- nlp
license: apache-2.0
datasets:
- amazon_polarity
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-distilbert
  results:
  - task:
      type: text-classification
      name: Sentiment Analysis
    dataset:
      name: Amazon Polarity
      type: amazon_polarity
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.90
    - name: F1
      type: f1
      value: 0.89
---

# Fine-Tuned DistilBERT for Sentiment Analysis

## Model Description

This model is a fine-tuned version of [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) on the [`amazon_polarity`](https://huggingface.co/datasets/amazon_polarity) dataset. It is designed for binary sentiment classification, predicting whether a given text expresses a **positive** (1) or **negative** (0) sentiment. The model leverages the lightweight architecture of DistilBERT, making it efficient for deployment while maintaining strong performance.

- **Developed by**: [Jack.RX Tech]
- **Model Type**: Transformer-based text classification
- **Base Model**: `distilbert-base-uncased`
- **Language**: English
- **License**: Apache 2.0

## Intended Uses

This model is intended for sentiment analysis tasks, particularly analyzing product reviews or user feedback. It can be used in:

- E-commerce platforms to monitor customer opinions.
- Social media analysis for brand reputation management.
- Market research to gauge consumer sentiment.

### Direct Use

The model can classify text directly, without additional fine-tuning, for similar binary sentiment tasks.
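
For instance, a minimal sketch using the `transformers` pipeline API (the repository name here is assumed from the How to Use example below):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a text-classification pipeline.
# "huevan/distilbert-base-uncased-rx" is the repository name assumed from the example below.
classifier = pipeline("text-classification", model="huevan/distilbert-base-uncased-rx")

result = classifier("The battery died after two days, very disappointing.")
print(result)  # List of dicts with "label" and "score"; label names depend on the model's id2label config.
```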

### Downstream Use

It can be further fine-tuned for domain-specific sentiment analysis (e.g., medical reviews, movie critiques).
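
As a rough sketch of what such fine-tuning could look like with the `Trainer` API (the dataset, column names, and hyperparameters below are illustrative assumptions, not the settings used to train this model):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "huevan/distilbert-base-uncased-rx"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Illustrative: any binary sentiment dataset with "text" and "label" columns works here.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="distilbert-sentiment-finetuned",
    num_train_epochs=1,              # illustrative hyperparameters
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```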

## How to Use

### Python Code Example

Below is an example of how to load and use the model with the `transformers` library:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the model and tokenizer
model_name = "huevan/distilbert-base-uncased-rx"  # replace with your repository name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Input text
text = "I love this product, it's amazing!"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)

# Predict
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
    prediction = outputs.logits.argmax(-1).item()

sentiment = "positive" if prediction == 1 else "negative"
print(f"Sentiment: {sentiment}")  # Output: positive
```