---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - sajid73
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 13
      type: mozilla-foundation/common_voice_13_0
      config: dv
      split: test
      args: dv
    metrics:
    - name: Wer
      type: wer
      value: 12.988142017595717
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Small Dv - sajid73

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1688
- Wer Ortho: 63.1381
- Wer: 12.9881
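
The two error rates above differ only in text preprocessing: the orthographic WER is computed on the raw transcripts, while the plain WER is computed after normalization (e.g. lowercasing and stripping punctuation), which is why it is much lower. A minimal, illustrative sketch of the metric — the `wer` and `normalize` functions here are simplified stand-ins, not the exact implementations the Trainer uses:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

def normalize(text: str) -> str:
    # Illustrative normalizer: lowercase, keep only letters/digits/spaces.
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace())

ref, hyp = "Hello, world!", "hello world"
print(wer(ref, hyp))                          # orthographic: both words mismatch -> 1.0
print(wer(normalize(ref), normalize(hyp)))    # normalized: exact match -> 0.0
```

The same hypothesis can thus score very differently under the two variants, which mirrors the gap between 63.1381 (ortho) and 12.9881 (normalized) reported above.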
					
						

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
					
						

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
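
These hyperparameters map directly onto `Seq2SeqTrainingArguments` from Transformers; a hedged sketch of the corresponding configuration (the `output_dir` path is an assumption, not from this card, and the Adam betas/epsilon listed above are the optimizer defaults so they need no explicit arguments):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameter list above; output_dir is an assumed placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-dv",        # hypothetical path
    per_device_train_batch_size=16,         # train_batch_size
    per_device_eval_batch_size=16,          # eval_batch_size
    learning_rate=1e-5,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,                        # lr_scheduler_warmup_steps
    max_steps=500,                          # training_steps
    seed=42,
)
```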
					
						

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1254        | 1.63  | 500  | 0.1688          | 63.1381   | 12.9881 |


### Framework versions

- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3