Hanhpt23/whisper-base-vietmed-v1

This model is a fine-tuned version of openai/whisper-base on the pphuc25/VietMed-split-8-2 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0352
  • WER: 27.1975
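For reference, the WER figure above is the word error rate: the word-level edit distance (substitutions + insertions + deletions) between the model transcript and the reference, divided by the number of reference words. A minimal sketch of the computation (the sample sentence pair is purely illustrative; in practice a library such as `evaluate` or `jiwer` is typically used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

score = wer("toi di hoc", "toi hoc bai")  # 2 edits over 3 reference words
```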

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

The model was fine-tuned and evaluated on pphuc25/VietMed-split-8-2, a Vietnamese medical-speech dataset split 80/20 into training and evaluation sets.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
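The linear scheduler with warmup can be illustrated with a short sketch. This assumes the schedule behaves like transformers' `get_linear_schedule_with_warmup` (linear ramp to the peak learning rate over the 100 warmup steps, then linear decay to zero), with 11,380 total optimizer steps (20 epochs × 569 steps per epoch, matching the step counts in the results table); it is an illustration, not the trainer's actual implementation:

```python
# Hyperparameters taken from the list above; TOTAL_STEPS is inferred
# from the results table (20 epochs x 569 steps/epoch = 11380).
PEAK_LR = 1e-4
WARMUP_STEPS = 100
TOTAL_STEPS = 11_380

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps under linear warmup + decay."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS              # linear warmup
    remaining = (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * max(0.0, remaining)                  # linear decay to 0

# e.g. lr_at(0) == 0.0, lr_at(100) == 1e-4, lr_at(11380) == 0.0
```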

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|:--------------|:------|:------|:----------------|:--------|
| 0.7081        | 1.0   | 569   | 0.7147          | 32.6304 |
| 0.5097        | 2.0   | 1138  | 0.6779          | 30.7670 |
| 0.3642        | 3.0   | 1707  | 0.6890          | 30.5144 |
| 0.2242        | 4.0   | 2276  | 0.7389          | 31.4662 |
| 0.1221        | 5.0   | 2845  | 0.7970          | 32.5828 |
| 0.0700        | 6.0   | 3414  | 0.8480          | 30.3240 |
| 0.0411        | 7.0   | 3983  | 0.8862          | 29.4380 |
| 0.0288        | 8.0   | 4552  | 0.9171          | 29.9066 |
| 0.0199        | 9.0   | 5121  | 0.9572          | 29.6321 |
| 0.0105        | 10.0  | 5690  | 0.9698          | 28.6473 |
| 0.0068        | 11.0  | 6259  | 0.9811          | 29.5881 |
| 0.0084        | 12.0  | 6828  | 0.9985          | 28.7424 |
| 0.0024        | 13.0  | 7397  | 0.9903          | 29.3355 |
| 0.0030        | 14.0  | 7966  | 1.0112          | 27.6588 |
| 0.0017        | 15.0  | 8535  | 1.0137          | 28.7205 |
| 0.0004        | 16.0  | 9104  | 1.0185          | 27.2305 |
| 0.0002        | 17.0  | 9673  | 1.0257          | 27.2964 |
| 0.0006        | 18.0  | 10242 | 1.0282          | 27.2817 |
| 0.0002        | 19.0  | 10811 | 1.0336          | 27.1609 |
| 0.0001        | 20.0  | 11380 | 1.0352          | 27.1975 |

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model size

  • 72.6M parameters (F32 tensors, stored as Safetensors)
Model tree for Hanhpt23/whisper-base-vietmed-v1

  • Base model: openai/whisper-base