---
license: apache-2.0
base_model: mosaicml/mpt-7b
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: laft_lug_mpt
results: []
---
# Paper and Citation
Paper: [Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages](https://arxiv.org/abs/2506.19187)
```
@misc{toukmaji2025prompttranslatefinetunereinitialize,
  title={Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages},
  author={Christopher Toukmaji and Jeffrey Flanigan},
  year={2025},
  eprint={2506.19187},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.19187},
}
```
# laft_lug_mpt
This model is a fine-tuned version of [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) on the Luganda (`lg`) subset of the [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9695
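For intuition, a mean cross-entropy loss of 2.9695 corresponds to a perplexity of roughly exp(2.9695) ≈ 19.5 on the evaluation set:

```python
import math

# Perplexity is the exponential of the mean cross-entropy loss.
eval_loss = 2.9695
print(math.exp(eval_loss))  # ~19.48
```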
## Model description
laft_lug_mpt is [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) fine-tuned on Luganda text, one of the language-adaptation setups studied in the paper linked above; see the paper for full experimental details.
## Intended uses & limitations
This checkpoint is a research artifact from the paper above, intended for studying in-context learning and language adaptation for Luganda. It is not instruction-tuned, and the validation loss rises after epoch 3 (see the results table below), so outputs should be treated as experimental.
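A minimal usage sketch, assuming the checkpoint is hosted under `ChrisToukmaji/laft_lug_mpt` and that, like the base MPT-7B, it ships custom modeling code requiring `trust_remote_code=True`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChrisToukmaji/laft_lug_mpt"  # assumed repo id; adjust if hosted elsewhere

tokenizer = AutoTokenizer.from_pretrained(model_id)
# MPT checkpoints define their architecture in remote code on the Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

inputs = tokenizer("Oli otya", return_tensors="pt")  # a Luganda greeting
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```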
## Training and evaluation data
The model was fine-tuned and evaluated on the Luganda (`lg`) split of [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0). Common Voice is a speech corpus, so for this text-only causal language model the transcript sentences are presumably what was used; see the paper for details.
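A sketch of loading the same data with the `datasets` library; note that Common Voice 11.0 is gated on the Hub, so you must accept its terms and authenticate (e.g. `huggingface-cli login`) first:

```python
from datasets import load_dataset

# "lg" is the Luganda configuration of Common Voice 11.0 (gated dataset).
cv_lg = load_dataset("mozilla-foundation/common_voice_11_0", "lg")

# For a text-only causal LM, presumably only the transcript field matters.
print(cv_lg["train"][0]["sentence"])
```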
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2000
- num_epochs: 6.0
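A sketch of how these values map onto Hugging Face `TrainingArguments`; the actual training script is not included in this card, and `output_dir`, `eval_strategy`, and `bf16` are assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="laft_lug_mpt",     # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-5,
    lr_scheduler_type="cosine",
    warmup_steps=2000,
    num_train_epochs=6.0,
    eval_strategy="epoch",         # per-epoch validation, as in the results table
    bf16=True,                     # assumed; precision is not stated in the card
)
```

Under a multi-GPU launcher (as the `distributed_type: multi-GPU` entry indicates), these arguments would be passed to a `Trainer` together with the model and the tokenized dataset.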
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7969 | 1.0 | 1510 | 2.6259 |
| 2.2344 | 2.0 | 3020 | 2.3295 |
| 2.0781 | 3.0 | 4530 | 2.2022 |
| 1.7266 | 4.0 | 6040 | 2.2305 |
| 0.7852 | 5.0 | 7550 | 2.6690 |
| 0.3105 | 6.0 | 9060 | 2.9695 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1