train_record_42_1761196626

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the record dataset. It achieves the following results on the evaluation set:

  • Loss: 6.1613
  • Num Input Tokens Seen: 929295680
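
Since the framework versions below list PEFT, this checkpoint is an adapter on top of meta-llama/Meta-Llama-3-8B-Instruct rather than a full set of weights. A minimal loading sketch, assuming the adapter is published as rbelanec/train_record_42_1761196626 and that you have access to the gated base model; the prompt shown is a placeholder, not the format used in training:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies this adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "rbelanec/train_record_42_1761196626",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Placeholder prompt; the actual ReCoRD-style prompt format is not documented on this card.
inputs = tokenizer("Passage: ...\nQuestion: ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```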

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
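
As a point of reference, here is how these values might map onto transformers' TrainingArguments. This is a sketch, not the original training script: the PEFT configuration, gradient accumulation, precision settings, and the output_dir name are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_record_42_1761196626",  # hypothetical name
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # AdamW, PyTorch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,             # 10% of total steps spent warming up
    num_train_epochs=20,
)
```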

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|---|---|---|---|---|
| 0.2884 | 1.0 | 31242 | 0.3315 | 46467680 |
| 0.4324 | 2.0 | 62484 | 0.2958 | 92922208 |
| 0.1934 | 3.0 | 93726 | 0.2884 | 139391296 |
| 0.261 | 4.0 | 124968 | 0.2816 | 185843712 |
| 0.2624 | 5.0 | 156210 | 0.2778 | 232305472 |
| 0.283 | 6.0 | 187452 | 0.2737 | 278763136 |
| 0.2666 | 7.0 | 218694 | 0.2717 | 325229440 |
| 0.353 | 8.0 | 249936 | 0.2705 | 371696128 |
| 0.2528 | 9.0 | 281178 | 0.2710 | 418155040 |
| 0.2807 | 10.0 | 312420 | 0.2715 | 464625760 |
| 0.25 | 11.0 | 343662 | 0.2744 | 511102336 |
| 0.1786 | 12.0 | 374904 | 0.2725 | 557571936 |
| 0.2782 | 13.0 | 406146 | 0.2775 | 604043392 |
| 0.1822 | 14.0 | 437388 | 0.2821 | 650494080 |
| 0.2032 | 15.0 | 468630 | 0.2881 | 696961792 |
| 0.1939 | 16.0 | 499872 | 0.2911 | 743437824 |
| 0.1907 | 17.0 | 531114 | 0.2977 | 789906944 |
| 0.1524 | 18.0 | 562356 | 0.3009 | 836372000 |
| 0.1492 | 19.0 | 593598 | 0.3042 | 882831072 |
| 0.1799 | 20.0 | 624840 | 0.3050 | 929295680 |
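
Validation loss reaches its minimum at epoch 8 (0.2705) and drifts upward afterwards, which suggests the later epochs overfit. If intermediate checkpoints were kept, the best one can be recovered from the trainer_state.json that transformers' Trainer writes into each checkpoint directory; a small sketch, with the path being an assumption:

```python
import json

# Hypothetical path: Trainer saves trainer_state.json inside every checkpoint dir.
with open("train_record_42_1761196626/checkpoint-624840/trainer_state.json") as f:
    state = json.load(f)

# log_history interleaves training and eval entries; keep only those with an eval loss.
evals = [e for e in state["log_history"] if "eval_loss" in e]
best = min(evals, key=lambda e: e["eval_loss"])
print(f"best epoch: {best['epoch']}, step: {best['step']}, eval_loss: {best['eval_loss']}")
```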

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1