---
base_model: final_models/focus_lug_phi_after_focus_reinit
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: focus_lug_phi_focus_trained
  results: []
---

# Paper and Citation

Paper: [Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages](https://arxiv.org/abs/2506.19187)

```
@misc{toukmaji2025prompttranslatefinetunereinitialize,
      title={Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages},
      author={Christopher Toukmaji and Jeffrey Flanigan},
      year={2025},
      eprint={2506.19187},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.19187},
}
```

# focus_lug_phi_focus_trained

This model is a fine-tuned version of [final_models/focus_lug_phi_after_focus_reinit](https://huggingface.co/final_models/focus_lug_phi_after_focus_reinit) on the mozilla-foundation/common_voice_11_0 lg (Luganda) dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5764

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2000
- num_epochs: 6.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.537         | 1.0   | 697  | 5.7700          |
| 5.7391        | 2.0   | 1394 | 5.3991          |
| 5.3313        | 3.0   | 2091 | 5.4057          |
| 4.0997        | 4.0   | 2788 | 5.1846          |
| 3.2874        | 5.0   | 3485 | 5.3427          |
| 1.9325        | 6.0   | 4182 | 5.5764          |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1
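
### Configuration sketch

The hyperparameters listed above map roughly onto the following `transformers` `TrainingArguments`. This is a minimal sketch for orientation, assuming the standard `Trainer` API was used; the `output_dir` is illustrative, and the multi-GPU launch options reported in the card are omitted.

```python
# Sketch of TrainingArguments matching the reported hyperparameters.
# Assumes the standard transformers Trainer; output_dir is illustrative.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="focus_lug_phi_focus_trained",  # illustrative
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-5,
    lr_scheduler_type="cosine",
    warmup_steps=2000,
    num_train_epochs=6.0,
)
```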
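
### Note on the validation loss

If the reported loss is the usual mean token-level cross-entropy in nats (the default for causal language-model training in `transformers`), the final validation loss corresponds to a perplexity of roughly exp(5.5764) ≈ 264:

```python
# Perplexity implied by the final validation loss,
# assuming it is mean token-level cross-entropy in nats.
import math

final_eval_loss = 5.5764
print(f"perplexity = {math.exp(final_eval_loss):.1f}")  # ≈ 264
```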
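
## Example usage

A minimal loading sketch, assuming this checkpoint is a Phi-family causal language model (as the name suggests) with its FOCUS-re-initialized tokenizer saved alongside the weights; the `model_id` below is illustrative and should point to wherever this checkpoint is stored.

```python
# Minimal sketch: load the fine-tuned checkpoint and score a Luganda sentence.
# Assumes a causal LM with its tokenizer saved in the same directory; the path is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "final_models/focus_lug_phi_focus_trained"  # replace with the actual path or Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "Oli otya?"  # example Luganda input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])
print(f"mean token cross-entropy: {outputs.loss.item():.4f}")
```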