Whisper Small - Mohammed Rakib

This model is a fine-tuned version of openai/whisper-small on the common-voice-11, google-fleurs, and openslr53 datasets. It achieves the following results on the evaluation set (a minimal inference sketch follows the metrics):

  • Loss: 0.0605
  • CER: 5.6737
  • WER: 10.5971
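
As a quick-start illustration, here is a minimal transcription sketch using the Transformers pipeline API. It assumes the checkpoint is published under the repository id Rakib/whisper-small-bn-all-600 (the repo name referenced elsewhere on this page) and that you have a local audio file; adjust both to your setup.

```python
# Minimal inference sketch. The repository id and the audio file name
# are assumptions; replace them with your own.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="Rakib/whisper-small-bn-all-600",
)

# The pipeline accepts a path to an audio file or a raw numpy array.
result = transcriber("sample.wav")
print(result["text"])
```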

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto Transformers training arguments follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 128
  • total_train_batch_size: 512
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 888
  • training_steps: 3000
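
As a rough reconstruction, the settings above map onto Transformers' Seq2SeqTrainingArguments roughly as follows. The output directory is an illustrative assumption, and anything not listed in the card (e.g. precision flags) is left at its default; the listed Adam betas and epsilon match the Transformers defaults.

```python
# Hedged sketch of the training configuration from the list above.
# output_dir is an assumption; it is not stated in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-finetune",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=128,  # 4 * 128 = total train batch size of 512
    lr_scheduler_type="cosine",
    warmup_steps=888,
    max_steps=3000,
    seed=42,
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the defaults,
    # matching the optimizer settings listed above.
)
```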

Training results

| Training Loss | Epoch | Step | Validation Loss | CER       | WER      |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|
| 1.4902        | 0.11  | 100  | 1.2636          | 1208.7378 | 889.9406 |
| 0.5703        | 0.23  | 200  | 0.4612          | 121.7766  | 108.6401 |
| 0.3679        | 0.34  | 300  | 0.3046          | 17.7385   | 35.2744  |
| 0.2301        | 0.45  | 400  | 0.2059          | 14.8853   | 29.7671  |
| 0.191         | 0.56  | 500  | 0.1693          | 12.7561   | 25.4528  |
| 0.1605        | 0.68  | 600  | 0.1462          | 11.5880   | 22.7845  |
| 0.146         | 0.79  | 700  | 0.1300          | 10.4321   | 20.5541  |
| 0.1296        | 0.9   | 800  | 0.1156          | 9.8572    | 19.2143  |
| 0.1212        | 1.01  | 900  | 0.1055          | 8.9462    | 17.4004  |
| 0.1072        | 1.13  | 1000 | 0.0978          | 8.2675    | 15.9234  |
| 0.1013        | 1.24  | 1100 | 0.0912          | 7.7918    | 15.0605  |
| 0.0952        | 1.35  | 1200 | 0.0854          | 7.5497    | 14.3207  |
| 0.0915        | 1.47  | 1300 | 0.0809          | 6.9833    | 13.3163  |
| 0.0843        | 1.58  | 1400 | 0.0780          | 6.6422    | 12.7179  |
| 0.0819        | 1.69  | 1500 | 0.0744          | 6.7287    | 12.6589  |
| 0.0798        | 1.8   | 1600 | 0.0718          | 6.4962    | 12.3022  |
| 0.0774        | 1.92  | 1700 | 0.0694          | 6.2198    | 11.8414  |
| 0.0695        | 2.03  | 1800 | 0.0680          | 6.1346    | 11.5683  |
| 0.0686        | 2.14  | 1900 | 0.0662          | 5.9758    | 11.2485  |
| 0.0681        | 2.25  | 2000 | 0.0647          | 6.0599    | 11.2842  |
| 0.0661        | 2.37  | 2100 | 0.0639          | 5.9500    | 11.1356  |
| 0.0653        | 2.48  | 2200 | 0.0631          | 5.8114    | 10.8952  |
| 0.0636        | 2.59  | 2300 | 0.0622          | 5.8502    | 10.9385  |
| 0.0641        | 2.71  | 2400 | 0.0615          | 5.7382    | 10.7015  |
| 0.0633        | 2.82  | 2500 | 0.0612          | 5.7038    | 10.6455  |
| 0.0626        | 2.93  | 2600 | 0.0608          | 5.8058    | 10.7597  |
| 0.06          | 3.04  | 2700 | 0.0605          | 5.7328    | 10.6374  |
| 0.0584        | 3.16  | 2800 | 0.0605          | 5.6737    | 10.5971  |
| 0.0585        | 3.27  | 2900 | 0.0604          | 5.6877    | 10.6098  |
| 0.0598        | 3.38  | 3000 | 0.0603          | 5.7075    | 10.6043  |
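
The CER and WER columns are character and word error rates, apparently on a percent scale (early values exceed 100). As an illustration of how such metrics are commonly computed, here is a small sketch using the Hugging Face evaluate library; the example strings are made up, and the card does not state which tooling was actually used.

```python
# Sketch: computing WER and CER with the `evaluate` library.
# The prediction/reference strings are illustrative only.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["hello world"]
references = ["hello word"]

# compute() returns a fraction; multiply by 100 for the
# percent-style numbers shown in the table above.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```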

Framework versions

  • Transformers 4.27.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.10.1
  • Tokenizers 0.13.2