# NuNER-multilingual-v0.1

Part of the "NuNER - Token Classification & NER backbones" collection: English/multilingual token classification foundation models with an MIT license.
This model provides the best embeddings for the entity recognition task and supports 9+ languages.

Check out other models by NuMind at https://huggingface.co/numind.

It is Multilingual BERT fine-tuned on an artificially annotated multilingual subset of the OSCAR dataset. The model provides domain- and language-independent embeddings for entity recognition. We fine-tuned it on only 9 languages, but it can generalize to other languages supported by Multilingual BERT.
Metrics:
Read more about the evaluation protocol & datasets in our blog post.
| Model | F1 macro |
|---|---|
| bert-base-multilingual-cased | 0.5206 |
| NuNER-multilingual-v0.1 (ours) | 0.5892 |
| NuNER-multilingual-v0.1 (ours, two-emb trick) | 0.6231 |
Embeddings can be used out of the box or fine-tuned on specific datasets.
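For the fine-tuning route, one common option is to load the backbone with a fresh token-classification head via `transformers.AutoModelForTokenClassification`. A minimal sketch; the label scheme below is a hypothetical illustration, not part of this model:

```python
import transformers

# Hypothetical BIO label scheme for a single entity type (illustration only).
label_list = ["O", "B-ENT", "I-ENT"]

# Loads the NuNER encoder weights and adds a randomly initialized
# classification head sized for the chosen label set.
model = transformers.AutoModelForTokenClassification.from_pretrained(
    'numind/NuNER-multilingual-v0.1',
    num_labels=len(label_list),
)
```

The head can then be trained on a token-labeled dataset, e.g. with the standard `transformers.Trainer` workflow.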
Get embeddings:
```python
import torch
import transformers

model = transformers.AutoModel.from_pretrained(
    'numind/NuNER-multilingual-v0.1',
    output_hidden_states=True,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/NuNER-multilingual-v0.1',
)

text = [
    "NuMind is an AI company based in Paris and USA.",
    "NuMind est une entreprise d'IA basée à Paris et aux États-Unis.",
    "See other models from us on https://huggingface.co/numind",
]

encoded_input = tokenizer(
    text,
    return_tensors='pt',
    padding=True,
    truncation=True,
)

with torch.no_grad():
    output = model(**encoded_input)

# Two-emb trick (better quality): concatenate the last hidden state with the
# one from 6 layers earlier along the feature dimension, giving one
# 1536-dimensional vector per token.
emb = torch.cat(
    (output.hidden_states[-1], output.hidden_states[-7]),
    dim=2,
)

# Single emb (better speed): just the last hidden state (768-dim per token).
# emb = output.hidden_states[-1]
```
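Since `AutoModelForTokenClassification` only uses the last hidden state, exploiting the two-emb trick requires a custom head. A minimal sketch of a linear probe, continuing from the snippet above; the three-label BIO scheme and the `head` module are hypothetical illustrations:

```python
# Continues from the snippet above: `emb` has shape (batch, seq_len, 1536).
num_labels = 3  # hypothetical: O, B-ENT, I-ENT

# Linear probe mapping each token's concatenated embedding to label logits.
head = torch.nn.Linear(emb.shape[-1], num_labels)
logits = head(emb)                    # (batch, seq_len, num_labels)
predictions = logits.argmax(dim=-1)   # per-token label ids

# During training, mask out padding tokens with the attention mask, e.g.:
# active = encoded_input['attention_mask'].bool()
# loss = torch.nn.functional.cross_entropy(logits[active], labels[active])
```

Because the embeddings were computed under `torch.no_grad()`, this trains only the probe on frozen features; remove the `no_grad` context to fine-tune the backbone end to end.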
Citation:

```bibtex
@misc{bogdanov2024nuner,
      title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
      author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
      year={2024},
      eprint={2402.15343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```