train_hellaswag_1754652169

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the hellaswag dataset. It achieves the following results on the evaluation set:

  • Loss: 1.7472
  • Num Input Tokens Seen: 108930064

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
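
The hyperparameters above map directly onto standard transformers.TrainingArguments fields. The sketch below is a hedged reconstruction of that configuration, not the exact training script; the output directory and any dataset wiring are illustrative assumptions.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# Output path is a hypothetical placeholder; dataset loading is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_hellaswag_1754652169",  # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```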

Training results

Training Loss  Epoch   Step   Validation Loss  Input Tokens Seen
0.1386         0.5001  4490   0.1300           5450816
0.1510         1.0001  8980   0.0868           10899840
0.0125         1.5002  13470  0.0775           16338976
0.1252         2.0002  17960  0.0812           21789168
0.0373         2.5003  22450  0.0732           27236592
0.0227         3.0003  26940  0.0626           32696128
0.1278         3.5004  31430  0.0628           38137920
0.0083         4.0004  35920  0.0662           43579472
0.0526         4.5005  40410  0.0681           49022960
0.1240         5.0006  44900  0.0629           54468496
0.0057         5.5006  49390  0.0646           59917136
0.1004         6.0007  53880  0.0625           65358976
0.0211         6.5007  58370  0.0690           70806016
0.0727         7.0008  62860  0.0656           76259312
0.0806         7.5008  67350  0.0788           81705616
0.0411         8.0009  71840  0.0713           87153488
0.0328         8.5009  76330  0.0820           92602480
0.0011         9.0010  80820  0.0804           98051504
0.0826         9.5011  85310  0.0826           103491728

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
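
Because this is a PEFT adapter rather than a full checkpoint, it is loaded on top of the base meta-llama/Meta-Llama-3-8B-Instruct model. A minimal usage sketch, assuming the adapter is published as rbelanec/train_hellaswag_1754652169 and that the prompt and generation settings are illustrative only:

```python
# Hedged sketch: loading the adapter with PEFT on top of the base model.
# Repository ids follow the model card; the prompt and generation
# parameters are illustrative assumptions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", device_map="auto"
)
model = PeftModel.from_pretrained(base, "rbelanec/train_hellaswag_1754652169")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

inputs = tokenizer("A sample HellaSwag-style prompt:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```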