Update README.md
README.md CHANGED
@@ -118,18 +118,6 @@ This model was finetuned maximizing F1 score.
 
 ### Evaluation results
 
-We evaluated the roberta-base-ca-v2-cased-pos on the Ancora-ca-ner test set against standard multilingual and monolingual baselines:
-
-| Model | Ancora-ca-pos (F1) |
-| ------------ |:-------------|
-| roberta-base-ca-v2-cased-pos | 98.96 |
-| roberta-base-ca-cased-pos | 98.96 |
-| mBERT | 98.83 |
-| XLM-RoBERTa | 98.89 |
-
-For more details, check the fine-tuning and evaluation scripts in the official GitHub repository.
-
-## Additional information
-
-### Author
+We evaluated the _roberta-base-ca-v2-cased-ner_ on the AnCora-Ca-NER test set against standard multilingual and monolingual baselines:
+
+| Model | AnCora-Ca-NER (F1)|
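For context on the metric the table reports: NER benchmarks such as AnCora-Ca-NER are conventionally scored with entity-level (span-level) F1, where a predicted entity counts as correct only if both its span and its label exactly match the gold annotation. The sketch below is not taken from the model card or its evaluation scripts; it is a minimal, self-contained illustration of that convention, with made-up example entities.

```python
# Minimal sketch of entity-level F1 for NER evaluation (illustrative only;
# the model card's own scripts are in its GitHub repository).
# gold/pred are sets of (start, end, label) tuples: an exact-match criterion.

def entity_f1(gold, pred):
    """Span-level F1: a prediction is a true positive only on exact match."""
    tp = len(gold & pred)                       # exact span+label matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: two gold entities, one predicted with the wrong label.
gold = {(0, 2, "PER"), (5, 7, "LOC")}
pred = {(0, 2, "PER"), (5, 7, "ORG")}
print(entity_f1(gold, pred))  # 0.5: correct span, wrong label is not a match
```

Note the strictness of the criterion: a correctly located entity with the wrong label scores zero credit, which is why multilingual baselines like mBERT and XLM-RoBERTa can trail a monolingual model by fractions of a point on this metric.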