887c886ce030fd10dce0bcc8b37b5407

This model is a fine-tuned version of Qwen/Qwen2.5-7B on the cola subset of the nyu-mll/glue dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

  • Loss: 3.4725
  • Data Size: 1.0
  • Epoch Runtime: 241.1725
  • Accuracy: 0.6846
  • F1 Macro: 0.4152
  • Rouge1: 0.6855
  • Rouge2: 0.0
  • Rougel: 0.6855
  • Rougelsum: 0.6846
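
The card reports both classification metrics (accuracy, macro F1) and ROUGE scores, which suggests the model generates label text rather than using a classification head. Below is a minimal loading sketch under that assumption; the prompt template is hypothetical, since the card does not document the format used during fine-tuning:

```python
# Hedged sketch: assumes the checkpoint is a causal LM that generates a
# label string for a CoLA sentence. The prompt template below is a guess;
# the card does not document the format used during fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "contemmcm/887c886ce030fd10dce0bcc8b37b5407"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

sentence = "The book was written by John."
prompt = f"Is the following sentence grammatically acceptable? {sentence}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```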

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
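
The card does not describe the splits, but the header names nyu-mll/glue (cola). A minimal sketch of loading that dataset with the datasets library (which splits were used for training versus evaluation is not documented):

```python
# Loads the CoLA subset of GLUE named in the card header.
from datasets import load_dataset

cola = load_dataset("nyu-mll/glue", "cola")
print(cola)              # DatasetDict with train / validation / test splits
print(cola["train"][0])  # e.g. {'sentence': ..., 'label': 1, 'idx': 0}
```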

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch of this configuration follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 32
  • total_eval_batch_size: 32
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: constant
  • num_epochs: 50
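
For reference, the values above map onto a transformers TrainingArguments configuration. This is a minimal reconstruction under stated assumptions, not the actual training script; the output directory is a hypothetical name, and any option not listed above is left at its default:

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
# The 4 GPUs come from the launcher (e.g. torchrun --nproc_per_node 4),
# so a per-device batch size of 8 gives the total batch size of 32.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2.5-7b-cola",   # assumption: not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
)
```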

Training results

| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Accuracy | F1 Macro | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------------:|:--------:|:--------:|:------:|:------:|:------:|:---------:|
| No log | 0  | 0    | 6.8133  | 0      | 3.6008   | 0.5342 | 0.4868 | 0.5342 | 0.0 | 0.5342 | 0.5347 |
| No log | 1  | 267  | 40.5906 | 0.0078 | 6.6616   | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| No log | 2  | 534  | 6.1571  | 0.0156 | 15.8873  | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| No log | 3  | 801  | 3.1608  | 0.0312 | 28.3234  | 0.3213 | 0.2589 | 0.3203 | 0.0 | 0.3213 | 0.3218 |
| No log | 4  | 1068 | 3.0131  | 0.0625 | 43.9222  | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| 0.3567 | 5  | 1335 | 4.1273  | 0.125  | 67.4639  | 0.6240 | 0.5153 | 0.6240 | 0.0 | 0.6240 | 0.6240 |
| 2.9761 | 6  | 1602 | 2.5782  | 0.25   | 99.0117  | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| 4.3065 | 7  | 1869 | 3.5554  | 0.5    | 143.9138 | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| 2.5352 | 8  | 2136 | 3.0619  | 1.0    | 244.2800 | 0.3115 | 0.2375 | 0.3105 | 0.0 | 0.3115 | 0.3115 |
| 2.6081 | 9  | 2403 | 2.4858  | 1.0    | 242.8371 | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| 2.597  | 10 | 2670 | 2.5125  | 1.0    | 242.3258 | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| 2.5392 | 11 | 2937 | 2.5480  | 1.0    | 233.0938 | 0.6895 | 0.4111 | 0.6904 | 0.0 | 0.6895 | 0.6895 |
| 2.4674 | 12 | 3204 | 2.4747  | 1.0    | 240.1788 | 0.6875 | 0.4074 | 0.6885 | 0.0 | 0.6875 | 0.6875 |
| 2.3903 | 13 | 3471 | 2.7004  | 1.0    | 232.4674 | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| 2.3017 | 14 | 3738 | 3.0116  | 1.0    | 247.9835 | 0.6885 | 0.4078 | 0.6895 | 0.0 | 0.6885 | 0.6885 |
| 1.9255 | 15 | 4005 | 3.2539  | 1.0    | 231.5395 | 0.6738 | 0.5067 | 0.6738 | 0.0 | 0.6738 | 0.6748 |
| 1.7593 | 16 | 4272 | 3.4725  | 1.0    | 241.1725 | 0.6846 | 0.4152 | 0.6855 | 0.0 | 0.6855 | 0.6846 |

Framework versions

  • Transformers 4.57.0
  • Pytorch 2.8.0+cu128
  • Datasets 4.2.0
  • Tokenizers 0.22.1
