train_record_42_1761016731

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the record dataset (a loading sketch follows the results below). It achieves the following results on the evaluation set:

  • Loss: 5.0937
  • Num Input Tokens Seen: 929295680
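
Since the framework versions below list PEFT, the adapter is meant to be loaded on top of the base model rather than used standalone. A minimal loading sketch, assuming the adapter repo id matches this card's name under the rbelanec namespace and that you have access to the gated Meta-Llama-3-8B-Instruct weights:

```python
# Minimal sketch: load the PEFT adapter on top of the base model.
# Assumes the adapter is hosted at "rbelanec/train_record_42_1761016731"
# (the repo id shown on this card).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_record_42_1761016731"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```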

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.03
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
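
These values map directly onto transformers.TrainingArguments. A configuration sketch, assuming a single device (so the batch sizes above correspond to per-device values) and omitting the model and dataset wiring:

```python
# Sketch of the hyperparameters above expressed as TrainingArguments.
# The output_dir is illustrative, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_record_42_1761016731",  # illustrative
    learning_rate=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```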

Training results

| Training Loss | Epoch | Step   | Validation Loss | Input Tokens Seen |
|---------------|-------|--------|-----------------|-------------------|
| 5.3794        | 1.0   | 31242  | 5.4198          | 46467680          |
| 4.9454        | 2.0   | 62484  | 5.2841          | 92922208          |
| 5.2501        | 3.0   | 93726  | 5.2237          | 139391296         |
| 4.1541        | 4.0   | 124968 | 5.2405          | 185843712         |
| 5.4368        | 5.0   | 156210 | 5.2964          | 232305472         |
| 5.5108        | 6.0   | 187452 | 5.1690          | 278763136         |
| 5.2356        | 7.0   | 218694 | 5.1795          | 325229440         |
| 4.597         | 8.0   | 249936 | 5.1671          | 371696128         |
| 4.4205        | 9.0   | 281178 | 5.1342          | 418155040         |
| 4.5441        | 10.0  | 312420 | 5.1323          | 464625760         |
| 5.3342        | 11.0  | 343662 | 5.1353          | 511102336         |
| 5.1793        | 12.0  | 374904 | 5.1258          | 557571936         |
| 5.6827        | 13.0  | 406146 | 5.1243          | 604043392         |
| 4.5015        | 14.0  | 437388 | 5.1007          | 650494080         |
| 4.6856        | 15.0  | 468630 | 5.0976          | 696961792         |
| 5.6257        | 16.0  | 499872 | 5.0968          | 743437824         |
| 4.6229        | 17.0  | 531114 | 5.0946          | 789906944         |
| 5.2043        | 18.0  | 562356 | 5.0942          | 836372000         |
| 5.5886        | 19.0  | 593598 | 5.0937          | 882831072         |
| 4.7927        | 20.0  | 624840 | 5.0940          | 929295680         |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
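
When reproducing results, it can help to check that the local environment matches the versions above. A small sanity-check sketch:

```python
# Prints installed versions next to the versions this adapter was
# trained with (taken from the list above).
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.15.2",
    "transformers": "4.51.3",
    "torch": "2.8.0+cu128",
    "datasets": "3.6.0",
    "tokenizers": "0.21.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: installed {installed[name]}, trained with {want}")
```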