---
library_name: transformers
license: apache-2.0
base_model: michiyasunaga/BioLinkBERT-large
tags:
- generated_from_trainer
datasets:
- source_data
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_NER_v1_0_0_BioLinkBERT_large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
config: NER
split: validation
args: NER
metrics:
- name: Precision
type: precision
value: 0.822425590865203
- name: Recall
type: recall
value: 0.8583257878902941
- name: F1
type: f1
value: 0.8399922822412943
---
# SourceData_NER_v1_0_0_BioLinkBERT_large
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1324
- Accuracy Score: 0.9585
- Precision: 0.8224
- Recall: 0.8583
- F1: 0.8400
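These appear to be the standard entity-level scores produced with the `seqeval` package, as in the Hugging Face token-classification example scripts. A minimal sketch of how such scores are computed; the label names below are illustrative assumptions, not the model's actual label set:
```python
# Hedged sketch: entity-level metrics via the seqeval library.
# The BIO tags below are illustrative examples only.
from seqeval.metrics import accuracy_score, precision_score, recall_score, f1_score

# One list of BIO tags per sentence (gold vs. predicted).
y_true = [["O", "B-GENEPROD", "I-GENEPROD", "O"]]
y_pred = [["O", "B-GENEPROD", "I-GENEPROD", "O"]]

print("accuracy ", accuracy_score(y_true, y_pred))
print("precision", precision_score(y_true, y_pred))
print("recall   ", recall_score(y_true, y_pred))
print("f1       ", f1_score(y_true, y_pred))
```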
## Model description
This is a token-classification (named-entity recognition) model obtained by fine-tuning [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the `NER` configuration of the source_data dataset.
## Intended uses & limitations
The model is intended for named-entity recognition on biomedical text of the kind found in the source_data dataset. Performance on other domains has not been evaluated here, and the limitations of the base model and of the training annotations carry over.
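A minimal inference sketch with the `transformers` pipeline; the repository id below is a placeholder assumption and the example sentence is illustrative:
```python
# Hedged sketch: loading the model for NER inference with the transformers pipeline.
# Replace the placeholder repository id with this model's actual Hub path.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<org-or-user>/SourceData_NER_v1_0_0_BioLinkBERT_large",  # placeholder repo id
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Cells were treated with rapamycin and stained for LC3."))
```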
## Training and evaluation data
The model was fine-tuned on the `NER` configuration of the source_data dataset. The results reported above were computed on its validation split.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2.0
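As a rough guide, the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows; this is a sketch, not the exact training script, and `output_dir` is a placeholder:
```python
# Hedged sketch: the hyperparameters listed above expressed as TrainingArguments
# (Transformers 4.46-style argument names).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SourceData_NER_v1_0_0_BioLinkBERT_large",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,   # effective train batch size of 64
    num_train_epochs=2.0,
    lr_scheduler_type="linear",
    optim="adafactor",               # Adafactor, no extra optimizer arguments
    seed=42,
)
```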
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.1047 | 0.9994 | 863 | 0.1295 | 0.9563 | 0.8179 | 0.8437 | 0.8306 |
| 0.0747 | 1.9988 | 1726 | 0.1324 | 0.9585 | 0.8224 | 0.8583 | 0.8400 |
### Framework versions
- Transformers 4.46.3
- Pytorch 1.13.1+cu117
- Datasets 3.1.0
- Tokenizers 0.20.3