End of training
Browse files
- README.md +29 -28
- adapter_config.json +2 -2
- adapter_model.safetensors +1 -1
- runs/Jul29_13-03-00_tardis/events.out.tfevents.1753786982.tardis.19487.0 +3 -0
- training_args.bin +1 -1
README.md
CHANGED

@@ -22,21 +22,21 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.
-- Rouge1: 0.
-- Rouge2: 0.
-- Rougel: 0.
-- Rougelsum: 0.
-- Gen Len: 29.
-- Bleu: 0.
-- Precisions: 0.
-- Brevity Penalty: 0.
-- Length Ratio: 0.
-- Translation Length:
+- Loss: 3.5599
+- Rouge1: 0.4697
+- Rouge2: 0.239
+- Rougel: 0.3921
+- Rougelsum: 0.3927
+- Gen Len: 29.3
+- Bleu: 0.1424
+- Precisions: 0.2062
+- Brevity Penalty: 0.8922
+- Length Ratio: 0.8976
+- Translation Length: 1096.0
 - Reference Length: 1221.0
-- Precision: 0.
-- Recall: 0.
-- F1: 0.
+- Precision: 0.9067
+- Recall: 0.9023
+- F1: 0.9044
 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1)
 
 ## Model description

@@ -56,29 +56,30 @@
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.
-- train_batch_size:
-- eval_batch_size:
+- learning_rate: 0.002
+- train_batch_size: 1
+- eval_batch_size: 1
 - seed: 42
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 10
-- mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:|
-|
-| 5.
-| 4.
-| 3.
-| 3.
-| 3.
-| 3.
-| 3.
-| 3.
-| 3.
+| 7.7469 | 1.0 | 7 | 6.8825 | 0.4069 | 0.2053 | 0.3491 | 0.3496 | 32.0 | 0.1293 | 0.164 | 1.0 | 1.0713 | 1308.0 | 1221.0 | 0.8782 | 0.8868 | 0.8824 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 5.647 | 2.0 | 14 | 4.7268 | 0.4079 | 0.2091 | 0.3571 | 0.3564 | 24.94 | 0.1023 | 0.2027 | 0.6841 | 0.7248 | 885.0 | 1221.0 | 0.9076 | 0.8896 | 0.8984 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 4.2551 | 3.0 | 21 | 3.9355 | 0.4487 | 0.2508 | 0.3879 | 0.3876 | 27.34 | 0.1555 | 0.2293 | 0.8182 | 0.8329 | 1017.0 | 1221.0 | 0.9067 | 0.8982 | 0.9023 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 3.6931 | 4.0 | 28 | 3.7415 | 0.4466 | 0.2287 | 0.3819 | 0.3833 | 25.88 | 0.126 | 0.217 | 0.7559 | 0.7813 | 954.0 | 1221.0 | 0.9073 | 0.8943 | 0.9006 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 3.4714 | 5.0 | 35 | 3.6417 | 0.4519 | 0.2393 | 0.3936 | 0.3948 | 27.74 | 0.1386 | 0.2131 | 0.8231 | 0.837 | 1022.0 | 1221.0 | 0.9094 | 0.8988 | 0.904 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 3.3284 | 6.0 | 42 | 3.6012 | 0.4464 | 0.2381 | 0.3804 | 0.383 | 28.96 | 0.1494 | 0.2089 | 0.8721 | 0.8796 | 1074.0 | 1221.0 | 0.9039 | 0.8991 | 0.9014 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 3.245 | 7.0 | 49 | 3.5702 | 0.4443 | 0.2155 | 0.3753 | 0.3765 | 28.2 | 0.1286 | 0.198 | 0.8525 | 0.8624 | 1053.0 | 1221.0 | 0.906 | 0.8975 | 0.9016 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 3.1794 | 8.0 | 56 | 3.5747 | 0.4596 | 0.2332 | 0.3882 | 0.3881 | 30.18 | 0.148 | 0.2069 | 0.9075 | 0.9115 | 1113.0 | 1221.0 | 0.9018 | 0.9007 | 0.9012 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 3.1144 | 9.0 | 63 | 3.5583 | 0.4513 | 0.2278 | 0.3795 | 0.3806 | 29.26 | 0.1358 | 0.2003 | 0.8794 | 0.8862 | 1082.0 | 1221.0 | 0.9037 | 0.9 | 0.9018 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| 3.1082 | 10.0 | 70 | 3.5599 | 0.4697 | 0.239 | 0.3921 | 0.3927 | 29.3 | 0.1424 | 0.2062 | 0.8922 | 0.8976 | 1096.0 | 1221.0 | 0.9067 | 0.9023 | 0.9044 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
 
 
 ### Framework versions
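The hyperparameter list above maps almost one-to-one onto `Seq2SeqTrainingArguments` from `transformers`; note that a per-device batch of 1 with 16 gradient-accumulation steps yields the listed total_train_batch_size of 16. A minimal sketch of that mapping follows; `output_dir` and `predict_with_generate` are assumptions for illustration, not values taken from this commit.

```python
# Sketch only: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="led-base-16384-lora",   # hypothetical output path
    learning_rate=0.002,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,     # 1 * 16 = total_train_batch_size of 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    predict_with_generate=True,         # assumed: needed to compute ROUGE/BLEU at eval time
)
```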
adapter_config.json
CHANGED

@@ -26,8 +26,8 @@
   "target_modules": [
     "k_proj",
     "q_proj",
-    "
-    "
+    "v_proj",
+    "out_proj"
   ],
   "task_type": "SEQ_2_SEQ_LM",
   "trainable_token_indices": null,
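The new `target_modules` list applies LoRA to all four attention projections of the LED base model. A minimal `peft.LoraConfig` sketch consistent with this hunk is below; `r`, `lora_alpha`, and `lora_dropout` do not appear in the hunk and are assumed placeholder values.

```python
# Sketch only: a LoraConfig consistent with the updated target_modules.
from peft import LoraConfig

lora_config = LoraConfig(
    task_type="SEQ_2_SEQ_LM",
    target_modules=["k_proj", "q_proj", "v_proj", "out_proj"],
    r=8,               # assumption: rank is not shown in this diff
    lora_alpha=16,     # assumption
    lora_dropout=0.1,  # assumption
)
```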
adapter_model.safetensors
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:12c24d03670e27dc5f25d2bea9a7e3d7cbd446aff585c7b479b3f06679da67af
 size 2372496
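Since this file holds LoRA adapter weights rather than a full checkpoint, it is loaded on top of the base model. A sketch, where "user/repo" stands in for this model's actual Hub id (not shown in the diff):

```python
# Sketch only: apply the adapter in this repo to the base LED model.
from transformers import AutoModelForSeq2SeqLM
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384")
model = PeftModel.from_pretrained(base, "user/repo")  # hypothetical repo id
```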
runs/Jul29_13-03-00_tardis/events.out.tfevents.1753786982.tardis.19487.0
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:562f37d475a827233572cc89791b72585a5f6e0184860eaf45e1d37c5fada3d4
+size 19371
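Each binary file in this commit is stored as a Git LFS pointer: a version line, an `oid sha256:` line, and a `size` line giving the byte count. A small, self-contained parser for that format, shown here on the new TensorBoard event file's pointer:

```python
# Sketch only: parse a Git LFS pointer file into its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:562f37d475a827233572cc89791b72585a5f6e0184860eaf45e1d37c5fada3d4\n"
    "size 19371\n"
)
assert pointer["size"] == "19371"  # size of the event file, in bytes
```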
training_args.bin
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b972a023c8cccdf13ad05c4dd23853df6f127febd949c092a28998a3b7393bb9
 size 5905
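`training_args.bin` is the pickled `TrainingArguments` object that `Trainer` saves alongside a run, so the hyperparameters listed in the README can be inspected directly. A sketch (on recent PyTorch, `weights_only=False` is required and implies trusting the file's pickle contents):

```python
# Sketch only: inspect the saved training arguments.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.lr_scheduler_type)
```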