floflodebilbao committed 12934b5 (verified) · Parent(s): 504d507

End of training
README.md CHANGED
@@ -22,21 +22,21 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 3.5772
- - Rouge1: 0.477
- - Rouge2: 0.2582
- - Rougel: 0.4063
- - Rougelsum: 0.4058
- - Gen Len: 29.72
- - Bleu: 0.1684
- - Precisions: 0.2244
- - Brevity Penalty: 0.9147
- - Length Ratio: 0.9181
- - Translation Length: 1121.0
+ - Loss: 3.5599
+ - Rouge1: 0.4697
+ - Rouge2: 0.239
+ - Rougel: 0.3921
+ - Rougelsum: 0.3927
+ - Gen Len: 29.3
+ - Bleu: 0.1424
+ - Precisions: 0.2062
+ - Brevity Penalty: 0.8922
+ - Length Ratio: 0.8976
+ - Translation Length: 1096.0
 - Reference Length: 1221.0
- - Precision: 0.906
- - Recall: 0.9034
- - F1: 0.9046
+ - Precision: 0.9067
+ - Recall: 0.9023
+ - F1: 0.9044
 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1)

 ## Model description
@@ -56,29 +56,30 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
- - learning_rate: 0.001
- - train_batch_size: 8
- - eval_batch_size: 8
+ - learning_rate: 0.002
+ - train_batch_size: 1
+ - eval_batch_size: 1
 - seed: 42
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 10
- - mixed_precision_training: Native AMP

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:|
- | 8.2369 | 1.0 | 13 | 6.2010 | 0.3878 | 0.1759 | 0.3252 | 0.3253 | 31.78 | 0.111 | 0.1493 | 1.0 | 1.0737 | 1311.0 | 1221.0 | 0.8831 | 0.8831 | 0.883 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 5.5627 | 2.0 | 26 | 5.2052 | 0.4251 | 0.2192 | 0.3738 | 0.3736 | 26.24 | 0.1227 | 0.2064 | 0.772 | 0.7944 | 970.0 | 1221.0 | 0.9067 | 0.8935 | 0.8999 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 4.4273 | 3.0 | 39 | 3.7823 | 0.4604 | 0.2497 | 0.3967 | 0.3971 | 27.26 | 0.1501 | 0.2249 | 0.8192 | 0.8337 | 1018.0 | 1221.0 | 0.9063 | 0.8994 | 0.9027 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.9367 | 4.0 | 52 | 3.6272 | 0.4554 | 0.2512 | 0.3954 | 0.3955 | 26.46 | 0.1504 | 0.2382 | 0.77 | 0.7928 | 968.0 | 1221.0 | 0.908 | 0.8965 | 0.9021 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.7676 | 5.0 | 65 | 3.5810 | 0.4683 | 0.2639 | 0.4067 | 0.4087 | 26.1 | 0.1551 | 0.249 | 0.7518 | 0.7781 | 950.0 | 1221.0 | 0.9154 | 0.9021 | 0.9086 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.6775 | 6.0 | 78 | 3.5931 | 0.4613 | 0.2477 | 0.3953 | 0.3952 | 29.62 | 0.1551 | 0.2141 | 0.8985 | 0.9034 | 1103.0 | 1221.0 | 0.9042 | 0.9016 | 0.9028 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.5802 | 7.0 | 91 | 3.5738 | 0.4599 | 0.2447 | 0.3889 | 0.3901 | 29.5 | 0.156 | 0.2092 | 0.92 | 0.923 | 1127.0 | 1221.0 | 0.904 | 0.9005 | 0.9022 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.5271 | 8.0 | 104 | 3.5739 | 0.4665 | 0.2559 | 0.3987 | 0.3986 | 28.38 | 0.1583 | 0.2278 | 0.8553 | 0.8649 | 1056.0 | 1221.0 | 0.9089 | 0.9027 | 0.9057 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.4856 | 9.0 | 117 | 3.5726 | 0.4653 | 0.2426 | 0.4004 | 0.3997 | 30.3 | 0.1569 | 0.2081 | 0.9401 | 0.9419 | 1150.0 | 1221.0 | 0.9012 | 0.9009 | 0.901 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | 3.419 | 10.0 | 130 | 3.5772 | 0.477 | 0.2582 | 0.4063 | 0.4058 | 29.72 | 0.1684 | 0.2244 | 0.9147 | 0.9181 | 1121.0 | 1221.0 | 0.906 | 0.9034 | 0.9046 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 7.7469 | 1.0 | 7 | 6.8825 | 0.4069 | 0.2053 | 0.3491 | 0.3496 | 32.0 | 0.1293 | 0.164 | 1.0 | 1.0713 | 1308.0 | 1221.0 | 0.8782 | 0.8868 | 0.8824 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 5.647 | 2.0 | 14 | 4.7268 | 0.4079 | 0.2091 | 0.3571 | 0.3564 | 24.94 | 0.1023 | 0.2027 | 0.6841 | 0.7248 | 885.0 | 1221.0 | 0.9076 | 0.8896 | 0.8984 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 4.2551 | 3.0 | 21 | 3.9355 | 0.4487 | 0.2508 | 0.3879 | 0.3876 | 27.34 | 0.1555 | 0.2293 | 0.8182 | 0.8329 | 1017.0 | 1221.0 | 0.9067 | 0.8982 | 0.9023 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.6931 | 4.0 | 28 | 3.7415 | 0.4466 | 0.2287 | 0.3819 | 0.3833 | 25.88 | 0.126 | 0.217 | 0.7559 | 0.7813 | 954.0 | 1221.0 | 0.9073 | 0.8943 | 0.9006 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.4714 | 5.0 | 35 | 3.6417 | 0.4519 | 0.2393 | 0.3936 | 0.3948 | 27.74 | 0.1386 | 0.2131 | 0.8231 | 0.837 | 1022.0 | 1221.0 | 0.9094 | 0.8988 | 0.904 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.3284 | 6.0 | 42 | 3.6012 | 0.4464 | 0.2381 | 0.3804 | 0.383 | 28.96 | 0.1494 | 0.2089 | 0.8721 | 0.8796 | 1074.0 | 1221.0 | 0.9039 | 0.8991 | 0.9014 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.245 | 7.0 | 49 | 3.5702 | 0.4443 | 0.2155 | 0.3753 | 0.3765 | 28.2 | 0.1286 | 0.198 | 0.8525 | 0.8624 | 1053.0 | 1221.0 | 0.906 | 0.8975 | 0.9016 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.1794 | 8.0 | 56 | 3.5747 | 0.4596 | 0.2332 | 0.3882 | 0.3881 | 30.18 | 0.148 | 0.2069 | 0.9075 | 0.9115 | 1113.0 | 1221.0 | 0.9018 | 0.9007 | 0.9012 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.1144 | 9.0 | 63 | 3.5583 | 0.4513 | 0.2278 | 0.3795 | 0.3806 | 29.26 | 0.1358 | 0.2003 | 0.8794 | 0.8862 | 1082.0 | 1221.0 | 0.9037 | 0.9 | 0.9018 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | 3.1082 | 10.0 | 70 | 3.5599 | 0.4697 | 0.239 | 0.3921 | 0.3927 | 29.3 | 0.1424 | 0.2062 | 0.8922 | 0.8976 | 1096.0 | 1221.0 | 0.9067 | 0.9023 | 0.9044 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |


 ### Framework versions
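In short, this commit swaps the previous flat batch size of 8 for a per-device batch size of 1 with 16 gradient-accumulation steps (effective batch size 1 × 16 = 16), doubles the learning rate to 0.002, and drops Native AMP mixed precision. A minimal sketch of how the updated configuration might be expressed with 🤗 Transformers' `Seq2SeqTrainingArguments`; the output directory and anything not listed in the card are placeholders:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters reported in the updated model card.
# "output_dir" is a hypothetical path; all other values come from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="led-base-16384-finetuned",  # placeholder
    learning_rate=2e-3,                     # learning_rate: 0.002
    per_device_train_batch_size=1,          # train_batch_size: 1
    per_device_eval_batch_size=1,           # eval_batch_size: 1
    gradient_accumulation_steps=16,         # total_train_batch_size: 1 * 16 = 16
    num_train_epochs=10,                    # num_epochs: 10
    lr_scheduler_type="linear",             # lr_scheduler_type: linear
    optim="adamw_torch",                    # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,                         # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                      # epsilon=1e-08
    seed=42,                                # seed: 42
)
```

The drop in steps per epoch in the results table (13 → 7) is consistent with this change: optimizer steps are counted after accumulation, so the same data yields fewer, larger updates.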
adapter_config.json CHANGED
@@ -26,8 +26,8 @@
 "target_modules": [
 "k_proj",
 "q_proj",
- "out_proj",
- "v_proj"
+ "v_proj",
+ "out_proj"
 ],
 "task_type": "SEQ_2_SEQ_LM",
 "trainable_token_indices": null,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:b4d010e93d82ab3964b3dc0ea462df6135d89950bb3bb20bd85ff5462f3339ec
+ oid sha256:12c24d03670e27dc5f25d2bea9a7e3d7cbd446aff585c7b479b3f06679da67af
 size 2372496
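The retrained LoRA weights replace the old ones byte-for-byte in size (2372496 bytes) but with a new content hash. A sketch of loading the adapter onto the base model with PEFT; the adapter repo id/path is a placeholder, since this commit view does not show it:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base LED model named in the card.
base = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384")
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")

# "ADAPTER_REPO_OR_PATH" is a placeholder for this adapter's Hub repo id
# or a local checkout containing adapter_config.json / adapter_model.safetensors.
model = PeftModel.from_pretrained(base, "ADAPTER_REPO_OR_PATH")
```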
runs/Jul29_13-03-00_tardis/events.out.tfevents.1753786982.tardis.19487.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:562f37d475a827233572cc89791b72585a5f6e0184860eaf45e1d37c5fada3d4
+ size 19371
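The added file under runs/ is a TensorBoard event log for this training run. A sketch of reading it programmatically; the scalar tag name below is an assumption, since the actual tags are not visible in the LFS pointer:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point at the event file added in this commit.
ea = EventAccumulator(
    "runs/Jul29_13-03-00_tardis/events.out.tfevents.1753786982.tardis.19487.0"
)
ea.Reload()
print(ea.Tags())  # lists the scalar tags actually logged

# "train/loss" is an assumed tag name; substitute one reported by ea.Tags().
for event in ea.Scalars("train/loss"):
    print(event.step, event.value)
```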
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:8d432f14048b72dc8fbc931660b79bd081b082c0ae64de9c7ef7ce3f9092c9f5
+ oid sha256:b972a023c8cccdf13ad05c4dd23853df6f127febd949c092a28998a3b7393bb9
 size 5905
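training_args.bin is a pickled `TrainingArguments` object, so it can be inspected to cross-check the values reported in the card. A sketch; note that unpickling executes arbitrary code, so only load this file from a source you trust:

```python
import torch

# training_args.bin is a pickle; weights_only=False is required to
# reconstruct the TrainingArguments object. Trusted files only.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate)                # expected: 0.002
print(args.per_device_train_batch_size)  # expected: 1
print(args.gradient_accumulation_steps)  # expected: 16
```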