---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - llama-factory
  - p-tuning
  - generated_from_trainer
model-index:
  - name: train_cb_1754652158
    results: []
---

# train_cb_1754652158

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset. It achieves the following results on the evaluation set:

- Loss: 1.5123
- Num input tokens seen: 367864
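
This repository contains only a PEFT adapter (p-tuning), so it has to be attached to the base model at load time. Below is a minimal loading and inference sketch, assuming the adapter is available under a hypothetical Hub id `rbelanec/train_cb_1754652158` (substitute a local adapter directory if needed):

```python
# Minimal sketch: load the p-tuning adapter on top of the base model.
# The adapter id below is an assumption; replace it with the actual path.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "rbelanec/train_cb_1754652158"  # hypothetical Hub id or local dir

# AutoPeftModelForCausalLM reads the adapter config to locate and load the
# base model (meta-llama/Meta-Llama-3-8B-Instruct), then attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

inputs = tokenizer("Premise: ... Hypothesis: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```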

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
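
The card only names the dataset as `cb`; assuming this refers to the CommitmentBank (CB) task from SuperGLUE, a loading sketch:

```python
# Assumption: "cb" is the CommitmentBank task from the SuperGLUE benchmark.
from datasets import load_dataset

# "super_glue" with the "cb" config is the historical Hub id; depending on
# your datasets version, a mirrored copy such as "aps/super_glue" may be needed.
cb = load_dataset("super_glue", "cb")
print(cb)               # train / validation / test splits
print(cb["train"][0])   # fields: premise, hypothesis, idx, label
```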

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
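
The card's tags suggest training ran through LLaMA-Factory; as a rough equivalent, the sketch below maps these hyperparameters onto plain `transformers`/`peft` objects. The p-tuning specifics (number of virtual tokens, prompt-encoder size) are not stated on this card, so those values are placeholders:

```python
# Sketch only: the listed hyperparameters expressed with transformers + peft.
# Values marked as assumptions are NOT taken from this card.
from peft import PromptEncoderConfig
from transformers import TrainingArguments

peft_config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,     # assumption: not stated on the card
    encoder_hidden_size=128,   # assumption: not stated on the card
)

args = TrainingArguments(
    output_dir="train_cb_1754652158",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",       # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```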

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.7984        | 0.5088 | 29   | 0.7325          | 20064             |
| 1.5006        | 1.0175 | 58   | 0.5228          | 37832             |
| 0.3837        | 1.5263 | 87   | 0.5209          | 57288             |
| 0.2138        | 2.0351 | 116  | 0.2320          | 74520             |
| 0.342         | 2.5439 | 145  | 0.2490          | 93080             |
| 0.231         | 3.0526 | 174  | 0.1698          | 111928            |
| 0.3391        | 3.5614 | 203  | 0.2744          | 131160            |
| 0.2601        | 4.0702 | 232  | 0.1618          | 150056            |
| 0.1828        | 4.5789 | 261  | 0.2539          | 167208            |
| 0.3148        | 5.0877 | 290  | 0.2545          | 186160            |
| 0.1062        | 5.5965 | 319  | 0.2156          | 206000            |
| 0.0974        | 6.1053 | 348  | 0.2206          | 224064            |
| 0.1758        | 6.6140 | 377  | 0.1560          | 243840            |
| 0.0921        | 7.1228 | 406  | 0.1961          | 261504            |
| 0.2705        | 7.6316 | 435  | 0.2389          | 280352            |
| 0.0502        | 8.1404 | 464  | 0.1646          | 299344            |
| 0.1646        | 8.6491 | 493  | 0.2208          | 318672            |
| 0.1549        | 9.1579 | 522  | 0.1983          | 337480            |
| 0.072         | 9.6667 | 551  | 0.2247          | 356456            |

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1