
Llama-3-8B-Taiwan-Llawa-TCxYZL-DPO-Beta-0.01-Instruct

This model is a DPO fine-tune of lopentu/Llama-3-8B-Taiwan-Llawa-TCxYZL-Instruct (β = 0.01, per the model name) on an unspecified preference dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5434
  • Rewards/chosen: -0.3119
  • Rewards/rejected: -0.7953
  • Rewards/accuracies: 0.8245
  • Rewards/margins: 0.4833
  • Logps/rejected: -118.7237
  • Logps/chosen: -150.8103
  • Logits/rejected: -0.1068
  • Logits/chosen: -0.0702
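
The Rewards/* metrics above are standard DPO diagnostics; the metric names and the "Beta-0.01" in the model name suggest trl's DPOTrainer, although the card does not state this. Under that reading, each completion carries an implicit reward measured against a frozen reference policy, and the loss is the negative log-sigmoid of the chosen-minus-rejected reward margin:

```latex
% Implicit DPO reward of completion y for prompt x, with \beta = 0.01:
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right)

% Pairwise loss over a (chosen, rejected) pair (y_w, y_l). At step 0 the
% margin is zero, so the loss is -\log(0.5) \approx 0.6931, matching the
% first row of the results table below.
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\big( r_\theta(x, y_w) - r_\theta(x, y_l) \big)
```

Rewards/accuracies is then the fraction of evaluation pairs where the chosen completion receives the higher reward, and Rewards/margins is the mean chosen-minus-rejected reward gap.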

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-07
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 6
  • gradient_accumulation_steps: 64
  • total_train_batch_size: 384
  • total_eval_batch_size: 6
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3.0
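
The card does not include the training script; the following is a minimal sketch of how these hyperparameters map onto trl's DPOTrainer, which the metric names suggest was used. The dataset file is a placeholder, and the 6-GPU launch (e.g. via accelerate) is assumed.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "lopentu/Llama-3-8B-Taiwan-Llawa-TCxYZL-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hyperparameters copied from the list above. The effective batch size is
# 1 (per device) x 6 (GPUs) x 64 (accumulation steps) = 384.
args = DPOConfig(
    output_dir="llama3-taiwan-dpo",
    beta=0.01,                      # DPO temperature, from the model name
    learning_rate=5e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=64,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

# Placeholder: the card does not name the preference dataset. It needs
# "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("json", data_files="preferences.jsonl")["train"]

trainer = DPOTrainer(
    model=model,
    ref_model=None,       # trl copies the policy as the frozen reference
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # renamed "processing_class" in newer trl releases
)
trainer.train()
```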

Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---------------|--------|------|-----------------|----------------|------------------|--------------------|-----------------|----------------|--------------|-----------------|---------------|
| No log        | 0      | 0    | 0.6931          | 0.0            | 0.0              | 0.0                | 0.0             | -39.1966       | -119.6160    | -0.1592         | -0.1378       |
| 0.6468        | 0.9937 | 76   | 0.6482          | -0.0350        | -0.1333          | 0.8596             | 0.0983          | -52.5218       | -123.1127    | -0.2376         | -0.2140       |
| 0.5604        | 1.9873 | 152  | 0.5646          | -0.2442        | -0.6292          | 0.8245             | 0.3850          | -102.1200      | -144.0388    | -0.1309         | -0.0973       |
| 0.5142        | 2.9810 | 228  | 0.5434          | -0.3119        | -0.7953          | 0.8245             | 0.4833          | -118.7237      | -150.8103    | -0.1068         | -0.0702       |
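
To try the final checkpoint, a minimal inference sketch (assuming the model inherits the standard Llama 3 instruct chat template from its base model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lopentu/Llama-3-8B-Taiwan-Llawa-TCxYZL-DPO-Beta-0.01-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# "Please introduce Taiwan's night-market culture in Traditional Chinese."
messages = [{"role": "user", "content": "請用繁體中文介紹台灣的夜市文化。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```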

Framework versions

  • Transformers 4.43.1
  • PyTorch 2.3.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1