---
license: apache-2.0
datasets:
- stanfordnlp/imdb
language:
- en
metrics:
- perplexity
base_model:
- google-bert/bert-base-uncased
pipeline_tag: fill-mask
tags:
- generatedtext
---
## MLM
This model is a fine-tuned version of google-bert/bert-base-uncased on the stanfordnlp/imdb dataset. It achieves the following results on the evaluation set:
- Eval loss: 4.18
- Perplexity (PPL): 50.58
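For a masked language model, perplexity is conventionally the exponential of the mean cross-entropy evaluation loss. A minimal sketch of that conversion (the helper name is illustrative, not part of this repo):

```python
import math

def perplexity_from_loss(mean_ce_loss: float) -> float:
    # PPL = exp(mean cross-entropy loss in nats), as reported
    # by the Trainer's eval_loss metric.
    return math.exp(mean_ce_loss)
```
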
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-5
- train_batch_size: 4
- gradient_accumulation_steps: 2
- seed: 42
- weight_decay: 0.01
- lr_scheduler_type: linear
- warmup_ratio: 0.1
- num_epochs: 2
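These settings map onto Hugging Face `TrainingArguments` roughly as follows (a sketch, not the exact training script; the output directory is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mlm-imdb",          # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,  # effective batch size: 4 * 2 = 8
    seed=42,
    weight_decay=0.01,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,               # 10% of steps spent warming up
    num_train_epochs=2,
)
```

Note that with gradient accumulation, the effective batch size per optimizer step is `train_batch_size * gradient_accumulation_steps = 8`.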