---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct
  - llama-factory
  - transformers
pipeline_tag: text-generation
model-index:
  - name: train_math_qa_101112_1760638065
    results: []
---

# train_math_qa_101112_1760638065

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the math_qa dataset. It achieves the following results on the evaluation set:

- Loss: 1.0945
- Num Input Tokens Seen: 77914328
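
Since this is a PEFT adapter rather than a full checkpoint, inference requires loading the base model and attaching the adapter on top. Below is a minimal sketch using `transformers` and `peft`; the adapter repo id `rbelanec/train_math_qa_101112_1760638065` is inferred from the model name and is an assumption, not confirmed by this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_math_qa_101112_1760638065"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

messages = [
    {"role": "user", "content": "A train covers 60 km in 1.5 hours. What is its average speed in km/h?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the adapter is LoRA-based, `model.merge_and_unload()` can be called after loading to obtain a merged, adapter-free model for faster inference.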

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 101112
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
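
For orientation, these settings map onto Hugging Face `TrainingArguments` roughly as sketched below. The run was launched via LLaMA-Factory (per the tags), so this is an illustrative equivalent rather than the original config; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Illustrative equivalent of the listed hyperparameters; the actual run
# used LLaMA-Factory's own configuration format.
args = TrainingArguments(
    output_dir="train_math_qa_101112_1760638065",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=101112,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```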

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|
| 1.0458        | 1.0   | 6714   | 1.1599          | 3894384           |
| 0.9912        | 2.0   | 13428  | 1.1152          | 7788792           |
| 1.4319        | 3.0   | 20142  | 1.1035          | 11683344          |
| 1.1822        | 4.0   | 26856  | 1.0999          | 15578064          |
| 1.7636        | 5.0   | 33570  | 1.1016          | 19479304          |
| 1.138         | 6.0   | 40284  | 1.1009          | 23378352          |
| 0.7847        | 7.0   | 46998  | 1.0980          | 27274568          |
| 1.0809        | 8.0   | 53712  | 1.1029          | 31172664          |
| 0.8975        | 9.0   | 60426  | 1.1025          | 35068368          |
| 1.5046        | 10.0  | 67140  | 1.1038          | 38966392          |
| 1.0965        | 11.0  | 73854  | 1.0945          | 42861936          |
| 1.0183        | 12.0  | 80568  | 1.0959          | 46756048          |
| 0.7437        | 13.0  | 87282  | 1.1003          | 50652416          |
| 1.1787        | 14.0  | 93996  | 1.0963          | 54546936          |
| 0.884         | 15.0  | 100710 | 1.0990          | 58442960          |
| 1.5105        | 16.0  | 107424 | 1.1018          | 62338944          |
| 0.9218        | 17.0  | 114138 | 1.0976          | 66231336          |
| 0.9596        | 18.0  | 120852 | 1.0976          | 70127040          |
| 0.9612        | 19.0  | 127566 | 1.0976          | 74021072          |
| 0.799         | 20.0  | 134280 | 1.0976          | 77914328          |

### Framework versions

- PEFT 0.17.1
- Transformers 4.51.3
- Pytorch 2.9.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4