Improve model card: Add pipeline tag, update license, enhance details and usage (#1)
- Improve model card: Add pipeline tag, update license, enhance details and usage (c049eb196dd3880f23069fcbceef0297f5f1ecb8)
- Update README.md (10dd8e4b8d9cec37e82a3cd2175b97badd21256c)
Co-authored-by: Niels Rogge <[email protected]>
README.md
CHANGED
---
base_model:
- microsoft/mdeberta-v3-base
language:
- ar
library_name: transformers
license: cc-by-4.0
metrics:
- accuracy
- f1
paper: 2507.11764
pipeline_tag: text-classification
tags:
- generated_from_trainer
- subjectivity-detection
- sentiment-analysis
- multilingual
- deberta-v3
model-index:
- name: mdeberta-v3-base-subjectivity-sentiment-arabic
  results: []
datasets:
- MatteoFasulo/clef2025_checkthat_task1_subjectivity
---

# mdeberta-v3-base-subjectivity-sentiment-arabic

This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) for the [CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles](https://huggingface.co/papers/2507.11764).

It achieves the following results on the evaluation set:
- Loss: 0.7442
- Macro F1: 0.5426
- Macro P: 0.5472
- Macro R: 0.5442
- Subj F1: 0.4457
- Subj P: 0.4910
- Subj R: 0.4080
- Accuracy: 0.5632

## GitHub Repository

The official code and materials for this work are available on GitHub: [MatteoFasulo/clef2025-checkthat](https://github.com/MatteoFasulo/clef2025-checkthat).

## Model description

This model is part of AI Wizards' participation in the CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles. Its primary goal is to classify sentences as subjective (opinion-laden) or objective across monolingual, multilingual, and zero-shot settings.

The core innovation lies in enhancing transformer-based classifiers by integrating sentiment scores, derived from an auxiliary model, with the sentence representation. This sentiment-augmented architecture, explored here with mDeBERTaV3-base, aims to improve performance, particularly the subjective F1 score. To address the class imbalance that is prevalent across languages, the framework employs decision threshold calibration optimized on the development set. This approach led to high rankings, notably 1st place for Greek (Macro F1 = 0.51).

**Key Contributions:**

* **Sentiment-Augmented Fine-Tuning**: Enriches typical embedding-based models by integrating sentiment scores, significantly improving subjective sentence detection.
* **Diverse Model Coverage**: Evaluated across multilingual BERT variants, ModernBERT (English-focused), and Llama 3.2-1B (zero-shot LLM baseline).
* **Threshold Calibration for Imbalance**: A simple yet effective method that tunes the decision threshold on each language's dev data to improve macro F1; a sketch of this idea follows the list.
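
The calibration itself is straightforward: instead of the default argmax (0.5 cutoff), the SUBJ-probability threshold is swept on the development set and the value that maximizes macro F1 is kept. The snippet below is a minimal sketch of that idea, not the authors' released code; it assumes you already have dev-set SUBJ probabilities and gold labels as NumPy arrays, and `calibrate_threshold` is an illustrative helper name.

```python
import numpy as np
from sklearn.metrics import f1_score

def calibrate_threshold(dev_probs: np.ndarray, dev_labels: np.ndarray) -> float:
    """Pick the SUBJ-probability cutoff that maximizes macro F1 on the dev set.

    dev_probs: P(SUBJ) for each dev sentence, shape (n,)
    dev_labels: gold labels, 1 = SUBJ, 0 = OBJ, shape (n,)
    """
    best_threshold, best_macro_f1 = 0.5, -1.0
    for threshold in np.arange(0.05, 0.96, 0.01):
        preds = (dev_probs >= threshold).astype(int)
        macro_f1 = f1_score(dev_labels, preds, average="macro")
        if macro_f1 > best_macro_f1:
            best_threshold, best_macro_f1 = threshold, macro_f1
    return best_threshold

# Toy example (replace with real dev-set outputs):
dev_probs = np.array([0.9, 0.2, 0.6, 0.4, 0.8])
dev_labels = np.array([1, 0, 1, 1, 0])
print(calibrate_threshold(dev_probs, dev_labels))
```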

## Intended uses & limitations

**Intended Uses:**

This model is intended for subjectivity detection in news articles, classifying sentences as subjective or objective. It can be applied to tasks such as:

* Combating misinformation by identifying opinion-laden content.
* Improving fact-checking pipelines.
* Supporting journalists in content analysis.
* Performing subjectivity detection in monolingual (Arabic, German, English, Italian, Bulgarian), multilingual, and zero-shot (Greek, Polish, Romanian, Ukrainian) settings.

**Limitations:**

* The model's performance might vary across different languages, especially in zero-shot scenarios.
* As noted in the paper, initial submission errors highlighted the sensitivity to correct data splits and calibration, which can impact reported performance.
* The effectiveness of sentiment feature integration is dependent on the quality and relevance of the auxiliary sentiment model.

## Training and evaluation data

This model was fine-tuned on the data released for the CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles. Training and development sets were provided for Arabic, German, English, Italian, and Bulgarian; for the final evaluation, additional unseen languages (Greek, Romanian, Polish, and Ukrainian) were included to assess generalization. All datasets label sentences from news articles as subjective or objective.
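
If you want to inspect the data yourself, the dataset referenced in the metadata can be pulled from the Hub. The following is a hedged sketch: the available configuration and split names are not documented here, so it lists them instead of assuming any.

```python
from datasets import get_dataset_config_names, load_dataset

repo = "MatteoFasulo/clef2025_checkthat_task1_subjectivity"

# Discover which per-language configurations the dataset exposes.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first configuration and inspect its splits and columns.
ds = load_dataset(repo, configs[0])
print(ds)
```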

## How to use

You can use this model with the Hugging Face `transformers` library. Because the classifier combines the pooled sentence representation with sentiment scores from an auxiliary sentiment model, inference uses a small custom model class rather than a plain `text-classification` pipeline:

```python
import torch
import torch.nn as nn
from transformers import DebertaV2Model, DebertaV2Config, AutoTokenizer, PreTrainedModel, pipeline
from transformers.models.deberta.modeling_deberta import ContextPooler

# Auxiliary sentiment model whose scores are fed to the classifier alongside the text
sent_pipe = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
    tokenizer="cardiffnlp/twitter-xlm-roberta-base-sentiment",
    top_k=None,  # return all 3 sentiment scores
)

class CustomModel(PreTrainedModel):
    config_class = DebertaV2Config

    def __init__(self, config, sentiment_dim=3, num_labels=2, *args, **kwargs):
        super().__init__(config, *args, **kwargs)
        self.deberta = DebertaV2Model(config)
        self.pooler = ContextPooler(config)
        output_dim = self.pooler.output_dim
        self.dropout = nn.Dropout(0.1)
        # The classifier sees the pooled sentence embedding concatenated with the 3 sentiment scores
        self.classifier = nn.Linear(output_dim + sentiment_dim, num_labels)

    def forward(self, input_ids, positive, neutral, negative, token_type_ids=None, attention_mask=None, labels=None):
        outputs = self.deberta(input_ids=input_ids, attention_mask=attention_mask)
        encoder_layer = outputs[0]
        pooled_output = self.pooler(encoder_layer)
        sentiment_features = torch.stack((positive, neutral, negative), dim=1).to(pooled_output.dtype)
        combined_features = torch.cat((pooled_output, sentiment_features), dim=1)
        logits = self.classifier(self.dropout(combined_features))
        return {'logits': logits}

model_name = "MatteoFasulo/mdeberta-v3-base-subjectivity-sentiment-arabic"
tokenizer = AutoTokenizer.from_pretrained("microsoft/mdeberta-v3-base")
config = DebertaV2Config.from_pretrained(
    model_name,
    num_labels=2,
    id2label={0: 'OBJ', 1: 'SUBJ'},
    label2id={'OBJ': 0, 'SUBJ': 1},
    output_attentions=False,
    output_hidden_states=False
)
model = CustomModel(config=config, sentiment_dim=3, num_labels=2).from_pretrained(model_name)

def classify_subjectivity(text: str):
    # get the full sentiment distribution from the auxiliary model
    dist = sent_pipe(text)[0]
    pos = next(d["score"] for d in dist if d["label"] == "positive")
    neu = next(d["score"] for d in dist if d["label"] == "neutral")
    neg = next(d["score"] for d in dist if d["label"] == "negative")

    # tokenize the text
    inputs = tokenizer(text, padding=True, truncation=True, max_length=256, return_tensors='pt')

    # feed the three sentiment scores in alongside the tokens
    with torch.no_grad():
        outputs = model(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            positive=torch.tensor(pos).unsqueeze(0).float(),
            neutral=torch.tensor(neu).unsqueeze(0).float(),
            negative=torch.tensor(neg).unsqueeze(0).float()
        )

    # compute probabilities and pick the top label
    probs = torch.softmax(outputs.get('logits')[0], dim=-1)
    label = model.config.id2label[int(probs.argmax())]
    score = probs.max().item()

    return {"label": label, "score": score}

examples = [
    "ستشمل الشحنة الأولية نصف الجرعات، يليها النصف الثاني بعد ثلاثة أسابيع.",
    "وهكذا بدأت النساء يعين أهمية دورهن في عدم الصمت أمام هذه الاقتحامات ورفضها بإعلاء صيحات الله أكبر.",
]
for text in examples:
    result = classify_subjectivity(text)
    print(f"Text: {text}")
    print(f"→ Subjectivity: {result['label']} (score={result['score']:.2f})\n")
```
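
If you have calibrated a per-language decision threshold as sketched above, you can replace the argmax decision with a cutoff on the SUBJ probability. The variation below reuses `sent_pipe`, `tokenizer`, and `model` from the snippet above; the 0.40 threshold is a placeholder, not a published value, and the helper name is illustrative.

```python
THRESHOLD = 0.40  # placeholder; tune on the Arabic dev split as sketched earlier

def classify_with_threshold(text: str, threshold: float = THRESHOLD):
    scores = {d["label"]: d["score"] for d in sent_pipe(text)[0]}
    inputs = tokenizer(text, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            positive=torch.tensor(scores["positive"]).unsqueeze(0).float(),
            neutral=torch.tensor(scores["neutral"]).unsqueeze(0).float(),
            negative=torch.tensor(scores["negative"]).unsqueeze(0).float(),
        )["logits"]
    # index 1 = SUBJ, matching the label2id mapping defined above
    p_subj = torch.softmax(logits[0], dim=-1)[1].item()
    return {"label": "SUBJ" if p_subj >= threshold else "OBJ", "p_subj": p_subj}

print(classify_with_threshold("وهكذا بدأت النساء يعين أهمية دورهن في عدم الصمت أمام هذه الاقتحامات ورفضها بإعلاء صيحات الله أكبر."))
```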

## Training procedure

### Training results

| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Macro P | Macro R | Subj F1 | Subj P | Subj R | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|:-------:|:-------:|:------:|:------:|:--------:|
| 0.6721 | 5.0 | 765 | 0.7238 | 0.5220 | 0.5667 | 0.5448 | 0.3490 | 0.5361 | 0.2587 | 0.5846 |
| 0.6721 | 6.0 | 918 | 0.7442 | 0.5426 | 0.5472 | 0.5442 | 0.4457 | 0.4910 | 0.4080 | 0.5632 |

### Framework versions

- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0

## Citation

If you find our work helpful or inspiring, please feel free to cite it:

```bibtex
@misc{fasulo2025aiwizardscheckthat2025,
      title={AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles},
      author={Matteo Fasulo and Luca Babboni and Luca Tedeschini},
      year={2025},
      eprint={2507.11764},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.11764},
}
```