Update README.md
README.md (CHANGED)
@@ -161,10 +161,6 @@ This model aims to solve the following common issues in NER context:
- **Within-document clustering**: Cluster mentions of the same entity that appear in different languages within the same document (e.g., "Cologne" and "Köln"); a usage sketch follows this list.
- **Long context handling**: Most NER models are limited to `512` tokens, which can be insufficient for documents with multiple entities or complex structures. This model was trained with a context of `4096` tokens.
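
Both behaviours can be exercised end to end. The following is a minimal usage sketch, assuming the model is published under a repo id similar to the demo Space (the id `pierre-tassel/rapido-ner` below is hypothetical) and that it loads through the standard `transformers` token-classification pipeline with `trust_remote_code`:

```python
# Minimal usage sketch; the repo id and loading path are assumptions, not
# confirmed by this README.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pierre-tassel/rapido-ner",  # hypothetical repo id, for illustration
    aggregation_strategy="simple",     # merge sub-word tokens into entity spans
    trust_remote_code=True,            # assumed necessary for the custom heads
)

# A bilingual document: "Cologne" and "Köln" name the same entity, and the
# 4096-token window means long documents need not be split into 512-token chunks.
doc = (
    "Cologne hosted the conference this year. "
    "Die Abschlussfeier fand im Zentrum von Köln statt."
)
for entity in ner(doc):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

Note that cross-lingual clustering of the two mentions presumably relies on the model's mention embeddings, which a plain pipeline call like this does not expose.
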
Want to quickly check the model's performance? Use the Space: [https://huggingface.co/spaces/pierre-tassel/rapido-ner-space](https://huggingface.co/spaces/pierre-tassel/rapido-ner-space)

## Model Overview
- **Architecture**: Fully fine-tuned MLM encoder backbone (`Alibaba-NLP/gte-multilingual-mlm-base`) + token-classification head + attention pooling + per-entity-type projection head + CRF
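
For orientation, here is an illustrative sketch of how these components could fit together. It is not the released implementation: the class names, projection dimension, and the use of the third-party `pytorch-crf` package are assumptions, and the real model presumably applies attention pooling per mention span rather than over the whole sequence as shown here.

```python
# Schematic (unofficial) reconstruction of the described architecture:
# encoder backbone -> token-classification head + CRF for tagging, plus
# attention pooling and per-entity-type projection heads for mention embeddings.
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party package: pip install pytorch-crf
from transformers import AutoModel


class AttentionPooling(nn.Module):
    """Weight token states with a learned attention score and sum them."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states, attention_mask):
        scores = self.score(hidden_states).squeeze(-1)            # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)     # (batch, seq, 1)
        return (weights * hidden_states).sum(dim=1)               # (batch, hidden)


class RapidoStyleNER(nn.Module):
    def __init__(self, num_tags: int, entity_types: list[str], proj_dim: int = 256):
        super().__init__()
        # Backbone named in the README; trust_remote_code is assumed to be
        # required, as for other gte-multilingual checkpoints.
        self.encoder = AutoModel.from_pretrained(
            "Alibaba-NLP/gte-multilingual-mlm-base", trust_remote_code=True
        )
        hidden = self.encoder.config.hidden_size
        self.token_head = nn.Linear(hidden, num_tags)  # per-token tag logits
        self.crf = CRF(num_tags, batch_first=True)     # structured tag decoding
        self.pool = AttentionPooling(hidden)
        # One projection head per entity type, producing mention embeddings
        # (e.g., for within-document clustering of "Cologne" / "Köln").
        self.project = nn.ModuleDict(
            {etype: nn.Linear(hidden, proj_dim) for etype in entity_types}
        )

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        emissions = self.token_head(hidden)                        # tag scores
        tags = self.crf.decode(emissions, mask=attention_mask.bool())
        pooled = self.pool(hidden, attention_mask)
        embeddings = {t: head(pooled) for t, head in self.project.items()}
        return tags, embeddings
```

In this reading, the CRF enforces globally consistent tag sequences on top of the per-token logits, while the pooled, per-type projections provide the embeddings used to cluster mentions of the same entity within a document.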