Update README.md
README.md CHANGED
@@ -22,6 +22,10 @@ Use this model if you want a debiased alternative to a BERT classifier.
 
 Please refer to the paper to know all the training details.
 
+## Dataset
+
+The model was fine-tuned on the English part of the [MLMA dataset](https://aclanthology.org/D19-1474/).
+
 ## Model
 
 This model is the fine-tuned version of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model.
@@ -67,5 +71,5 @@ Please use the following BibTeX entry if you use this model in your project:
 Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context. \
 Please refer to our paper for a quantitative evaluation of this mitigation.
 
-
+# License
 [GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/)
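
The updated README describes a sequence classifier fine-tuned from bert-base-uncased on the English part of MLMA. Below is a minimal sketch of how such a classifier is typically loaded with the `transformers` Auto classes; the Hub repository ID does not appear in this diff, so the `model_id` below is a placeholder to replace with the actual one.

```python
# Minimal sketch: load the fine-tuned classifier with Hugging Face transformers.
# The repository ID is a placeholder; substitute the actual model ID from the Hub.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-org/debiased-bert-classifier"  # placeholder, not the real repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "An example sentence to classify."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(predicted_class_id, predicted_class_id))
```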
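
The README also names Entropy-Attention Regularization as the mitigation for lexical overfitting. The sketch below illustrates the general idea only, under the assumption that the regularizer rewards high-entropy self-attention distributions by subtracting a weighted mean attention entropy from the task loss; it is not the authors' implementation, and `reg_strength` and the choice of layers are illustrative (see the paper for the actual formulation and hyperparameters).

```python
# Illustrative sketch only, not the authors' code: an entropy-based attention
# regularizer that discourages attention from collapsing onto single
# (possibly spurious) keywords.
import torch


def mean_attention_entropy(attentions, eps: float = 1e-9) -> torch.Tensor:
    """Mean entropy of self-attention distributions.

    `attentions` is the tuple returned by a transformers model called with
    output_attentions=True; each tensor has shape (batch, heads, seq, seq).
    """
    per_layer = []
    for layer_attn in attentions:
        # Entropy over the attended-token dimension (each row sums to 1).
        entropy = -(layer_attn * (layer_attn + eps).log()).sum(dim=-1)
        per_layer.append(entropy.mean())
    return torch.stack(per_layer).mean()


# Inside a training step (sketch; reg_strength is an assumed hyperparameter):
# outputs = model(**batch, labels=labels, output_attentions=True)
# loss = outputs.loss - reg_strength * mean_attention_entropy(outputs.attentions)
# loss.backward()
```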