
whisper-large-v3-turbo-lora-el

This model is a fine-tuned version of openai/whisper-large-v3 on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how the WER figure is typically computed follows the list):

  • Loss: 0.1734
  • WER: 0.4643
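
For reference, WER is the word error rate on the evaluation set. Below is a minimal, illustrative sketch of computing WER with the Hugging Face `evaluate` library; the prediction and reference strings are hypothetical and not drawn from the actual evaluation data.

```python
# Illustrative WER computation with the `evaluate` library.
# The prediction/reference strings below are made-up examples.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["καλημέρα σας"]        # hypothetical model transcription
references = ["καλημέρα σε όλους"]    # hypothetical ground-truth transcript
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 1 substitution + 1 deletion over 3 reference words
```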

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of an equivalent training-arguments configuration follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • training_steps: 5000
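
As a hedged illustration only, the values above map onto a `transformers.Seq2SeqTrainingArguments` configuration roughly like the sketch below. The output directory, evaluation cadence, and precision setting are assumptions (the 250-step cadence is inferred from the results table); the actual training script and the LoRA adapter configuration (rank, alpha, target modules) are not documented in this card.

```python
# Sketch of training arguments matching the listed hyperparameters.
# output_dir, eval cadence, and fp16 are assumptions, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-turbo-lora-el",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # effective train batch size: 2 * 8 = 16
    max_steps=5000,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    eval_strategy="steps",           # inferred from the 250-step eval cadence
    eval_steps=250,
    fp16=True,                       # assumption; common for Whisper fine-tuning
)
```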

Training results

| Training Loss | Epoch    | Step | Validation Loss | WER    |
|:-------------:|:--------:|:----:|:---------------:|:------:|
| 0.3662        | 10.4278  | 250  | 0.3702          | 0.5488 |
| 0.2983        | 20.8556  | 500  | 0.3048          | 0.5264 |
| 0.2742        | 31.2567  | 750  | 0.2839          | 0.5168 |
| 0.2647        | 41.6845  | 1000 | 0.2698          | 0.5096 |
| 0.242         | 52.0856  | 1250 | 0.2581          | 0.5085 |
| 0.2395        | 62.5134  | 1500 | 0.2471          | 0.5019 |
| 0.229         | 72.9412  | 1750 | 0.2371          | 0.4973 |
| 0.2187        | 83.3422  | 2000 | 0.2279          | 0.4955 |
| 0.2086        | 93.7701  | 2250 | 0.2192          | 0.4947 |
| 0.202         | 104.1711 | 2500 | 0.2113          | 0.4942 |
| 0.1952        | 114.5989 | 2750 | 0.2041          | 0.4936 |
| 0.1828        | 125.0    | 3000 | 0.1974          | 0.4805 |
| 0.1819        | 135.4278 | 3250 | 0.1918          | 0.4826 |
| 0.1748        | 145.8556 | 3500 | 0.1867          | 0.4786 |
| 0.1755        | 156.2567 | 3750 | 0.1825          | 0.4770 |
| 0.1719        | 166.6845 | 4000 | 0.1791          | 0.4708 |
| 0.169         | 177.0856 | 4250 | 0.1766          | 0.4707 |
| 0.1674        | 187.5134 | 4500 | 0.1748          | 0.4640 |
| 0.1662        | 197.9412 | 4750 | 0.1738          | 0.4643 |
| 0.1609        | 208.3422 | 5000 | 0.1734          | 0.4643 |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.52.3
  • PyTorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1
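
Usage example

Since this repository contains a PEFT LoRA adapter, inference requires loading the base Whisper model and attaching the adapter. The sketch below is a minimal, unverified example: the card names openai/whisper-large-v3 as the base model, although the adapter's name suggests the turbo variant (openai/whisper-large-v3-turbo) and the -el suffix suggests Greek, so adjust `base_id` and the language code as needed.

```python
# Minimal inference sketch: load the base Whisper model, attach the LoRA
# adapter, and transcribe a 16 kHz mono waveform. Repo ids as per this card.
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v3"  # base model named in the card
adapter_id = "Fotiissss/whisper-large-v3-turbo-lora-el"

processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach LoRA weights
model.eval()

def transcribe(audio):
    # `audio` is assumed to be a 1-D float array sampled at 16 kHz.
    inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
    features = inputs.input_features.to(model.device, dtype=torch.float16)
    with torch.no_grad():
        ids = model.generate(features, language="el", task="transcribe")
    return processor.batch_decode(ids, skip_special_tokens=True)[0]
```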