elselse committed on
Commit 9b52994 · verified · 1 Parent(s): 2498ec3

CIRCL/cwe-parent-vulnerability-classification-roberta-base

Files changed (5)
  1. README.md +44 -39
  2. config.json +52 -52
  3. emissions.csv +1 -1
  4. metrics.json +7 -7
  5. model.safetensors +1 -1
README.md CHANGED
@@ -18,9 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.0015
- - Accuracy: 0.7126
- - F1 Macro: 0.4115
+ - Loss: 1.3447
+ - Accuracy: 0.7727
+ - F1 Macro: 0.4235
 
  ## Model description
 
@@ -45,47 +45,52 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - num_epochs: 35
+ - num_epochs: 40
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
- | 3.2752 | 1.0 | 25 | 3.2654 | 0.0345 | 0.0061 |
- | 3.1821 | 2.0 | 50 | 3.2462 | 0.1494 | 0.0485 |
- | 3.051 | 3.0 | 75 | 3.1660 | 0.0 | 0.0 |
- | 3.0809 | 4.0 | 100 | 3.1639 | 0.2989 | 0.0578 |
- | 2.9999 | 5.0 | 125 | 3.0634 | 0.2759 | 0.0698 |
- | 2.8926 | 6.0 | 150 | 3.0242 | 0.2069 | 0.1097 |
- | 3.0126 | 7.0 | 175 | 2.9642 | 0.1724 | 0.1803 |
- | 2.8108 | 8.0 | 200 | 2.9361 | 0.3218 | 0.1682 |
- | 2.6444 | 9.0 | 225 | 2.8841 | 0.2874 | 0.1558 |
- | 2.5221 | 10.0 | 250 | 2.8314 | 0.3448 | 0.1668 |
- | 2.4355 | 11.0 | 275 | 2.7143 | 0.4253 | 0.1711 |
- | 2.2156 | 12.0 | 300 | 2.7263 | 0.5402 | 0.2043 |
- | 2.1266 | 13.0 | 325 | 2.6320 | 0.5862 | 0.2477 |
- | 2.0063 | 14.0 | 350 | 2.5443 | 0.6092 | 0.2651 |
- | 1.9204 | 15.0 | 375 | 2.5183 | 0.6092 | 0.2626 |
- | 1.718 | 16.0 | 400 | 2.4682 | 0.6437 | 0.2928 |
- | 1.6489 | 17.0 | 425 | 2.4026 | 0.6437 | 0.3107 |
- | 1.5979 | 18.0 | 450 | 2.3305 | 0.6437 | 0.3022 |
- | 1.4923 | 19.0 | 475 | 2.2997 | 0.6322 | 0.2902 |
- | 1.3487 | 20.0 | 500 | 2.2546 | 0.6437 | 0.2980 |
- | 1.3267 | 21.0 | 525 | 2.1921 | 0.6437 | 0.2980 |
- | 1.2326 | 22.0 | 550 | 2.1755 | 0.6552 | 0.3066 |
- | 1.1961 | 23.0 | 575 | 2.1594 | 0.6437 | 0.3053 |
- | 1.0961 | 24.0 | 600 | 2.1266 | 0.6782 | 0.3969 |
- | 1.0278 | 25.0 | 625 | 2.1122 | 0.6897 | 0.3978 |
- | 1.0279 | 26.0 | 650 | 2.0835 | 0.6897 | 0.3978 |
- | 0.988 | 27.0 | 675 | 2.0699 | 0.6782 | 0.3927 |
- | 0.9298 | 28.0 | 700 | 2.0440 | 0.7126 | 0.4073 |
- | 0.9014 | 29.0 | 725 | 2.0194 | 0.7011 | 0.4042 |
- | 0.877 | 30.0 | 750 | 2.0455 | 0.7011 | 0.4026 |
- | 0.8503 | 31.0 | 775 | 2.0098 | 0.7126 | 0.4073 |
- | 0.8187 | 32.0 | 800 | 2.0033 | 0.7126 | 0.4115 |
- | 0.7948 | 33.0 | 825 | 2.0046 | 0.7126 | 0.4073 |
- | 0.8645 | 34.0 | 850 | 2.0064 | 0.7126 | 0.4115 |
- | 0.7605 | 35.0 | 875 | 2.0015 | 0.7126 | 0.4115 |
+ | 3.263 | 1.0 | 25 | 3.2313 | 0.2614 | 0.0345 |
+ | 3.1726 | 2.0 | 50 | 2.9923 | 0.1818 | 0.0431 |
+ | 3.1008 | 3.0 | 75 | 2.8828 | 0.0 | 0.0 |
+ | 2.9805 | 4.0 | 100 | 2.8858 | 0.0114 | 0.0019 |
+ | 3.0021 | 5.0 | 125 | 2.8257 | 0.4432 | 0.0626 |
+ | 2.8775 | 6.0 | 150 | 2.7730 | 0.0795 | 0.0462 |
+ | 2.8805 | 7.0 | 175 | 2.6421 | 0.2841 | 0.1362 |
+ | 2.6602 | 8.0 | 200 | 2.6462 | 0.3864 | 0.1366 |
+ | 2.5303 | 9.0 | 225 | 2.5584 | 0.3523 | 0.1461 |
+ | 2.5236 | 10.0 | 250 | 2.4933 | 0.4205 | 0.1209 |
+ | 2.3221 | 11.0 | 275 | 2.3458 | 0.5909 | 0.2232 |
+ | 2.1446 | 12.0 | 300 | 2.2679 | 0.625 | 0.2521 |
+ | 1.9937 | 13.0 | 325 | 2.1932 | 0.625 | 0.2736 |
+ | 1.8521 | 14.0 | 350 | 2.0372 | 0.6477 | 0.2881 |
+ | 1.7899 | 15.0 | 375 | 1.9494 | 0.6364 | 0.2679 |
+ | 1.5273 | 16.0 | 400 | 1.8457 | 0.6705 | 0.3205 |
+ | 1.4178 | 17.0 | 425 | 1.8276 | 0.6477 | 0.2931 |
+ | 1.335 | 18.0 | 450 | 1.7690 | 0.6591 | 0.3004 |
+ | 1.2685 | 19.0 | 475 | 1.6681 | 0.6705 | 0.3577 |
+ | 1.112 | 20.0 | 500 | 1.6399 | 0.6818 | 0.3152 |
+ | 1.01 | 21.0 | 525 | 1.5561 | 0.6932 | 0.3255 |
+ | 0.9637 | 22.0 | 550 | 1.5008 | 0.7159 | 0.4218 |
+ | 0.9571 | 23.0 | 575 | 1.5387 | 0.7045 | 0.3385 |
+ | 0.8213 | 24.0 | 600 | 1.5366 | 0.7159 | 0.4043 |
+ | 0.7538 | 25.0 | 625 | 1.4691 | 0.75 | 0.3942 |
+ | 0.7228 | 26.0 | 650 | 1.4826 | 0.7273 | 0.3872 |
+ | 0.7244 | 27.0 | 675 | 1.4789 | 0.7386 | 0.3915 |
+ | 0.6746 | 28.0 | 700 | 1.4439 | 0.7727 | 0.4322 |
+ | 0.5959 | 29.0 | 725 | 1.4202 | 0.7614 | 0.3942 |
+ | 0.5788 | 30.0 | 750 | 1.4339 | 0.7727 | 0.4002 |
+ | 0.5718 | 31.0 | 775 | 1.3723 | 0.7955 | 0.4431 |
+ | 0.5358 | 32.0 | 800 | 1.4186 | 0.7727 | 0.3812 |
+ | 0.5094 | 33.0 | 825 | 1.3722 | 0.7841 | 0.4579 |
+ | 0.5003 | 34.0 | 850 | 1.3955 | 0.7614 | 0.3786 |
+ | 0.4973 | 35.0 | 875 | 1.3733 | 0.8068 | 0.4635 |
+ | 0.4721 | 36.0 | 900 | 1.3447 | 0.7727 | 0.4235 |
+ | 0.4457 | 37.0 | 925 | 1.3622 | 0.7955 | 0.4573 |
+ | 0.4232 | 38.0 | 950 | 1.3736 | 0.7614 | 0.3986 |
+ | 0.4405 | 39.0 | 975 | 1.3683 | 0.7727 | 0.4235 |
+ | 0.437 | 40.0 | 1000 | 1.3642 | 0.7614 | 0.3986 |
 
 
  ### Framework versions
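The hyperparameters listed in the card map directly onto a standard `transformers` Trainer configuration. A minimal sketch, assuming only what this diff shows (seed 42, `adamw_torch` with the given betas/epsilon, linear schedule, 40 epochs); the learning rate and batch size are not visible in this excerpt and are placeholders:

```python
# Sketch only: mirrors the hyperparameters shown in the README diff.
# learning_rate and per_device_train_batch_size are assumptions, not values
# confirmed by this commit.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=26)

args = TrainingArguments(
    output_dir="cwe-parent-vulnerability-classification-roberta-base",
    seed=42,
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    learning_rate=5e-5,              # assumption: not shown in this diff
    per_device_train_batch_size=16,  # assumption: not shown in this diff
    eval_strategy="epoch",           # named evaluation_strategy on older transformers releases
    logging_strategy="epoch",
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```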
config.json CHANGED
@@ -10,62 +10,62 @@
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2",
- "3": "LABEL_3",
- "4": "LABEL_4",
- "5": "LABEL_5",
- "6": "LABEL_6",
- "7": "LABEL_7",
- "8": "LABEL_8",
- "9": "LABEL_9",
- "10": "LABEL_10",
- "11": "LABEL_11",
- "12": "LABEL_12",
- "13": "LABEL_13",
- "14": "LABEL_14",
- "15": "LABEL_15",
- "16": "LABEL_16",
- "17": "LABEL_17",
- "18": "LABEL_18",
- "19": "LABEL_19",
- "20": "LABEL_20",
- "21": "LABEL_21",
- "22": "LABEL_22",
- "23": "LABEL_23",
- "24": "LABEL_24",
- "25": "LABEL_25"
+ "0": "1025",
+ "1": "1071",
+ "2": "131",
+ "3": "138",
+ "4": "284",
+ "5": "285",
+ "6": "435",
+ "7": "436",
+ "8": "595",
+ "9": "657",
+ "10": "664",
+ "11": "682",
+ "12": "684",
+ "13": "691",
+ "14": "693",
+ "15": "697",
+ "16": "703",
+ "17": "706",
+ "18": "707",
+ "19": "710",
+ "20": "74",
+ "21": "754",
+ "22": "829",
+ "23": "862",
+ "24": "913",
+ "25": "94"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_10": 10,
- "LABEL_11": 11,
- "LABEL_12": 12,
- "LABEL_13": 13,
- "LABEL_14": 14,
- "LABEL_15": 15,
- "LABEL_16": 16,
- "LABEL_17": 17,
- "LABEL_18": 18,
- "LABEL_19": 19,
- "LABEL_2": 2,
- "LABEL_20": 20,
- "LABEL_21": 21,
- "LABEL_22": 22,
- "LABEL_23": 23,
- "LABEL_24": 24,
- "LABEL_25": 25,
- "LABEL_3": 3,
- "LABEL_4": 4,
- "LABEL_5": 5,
- "LABEL_6": 6,
- "LABEL_7": 7,
- "LABEL_8": 8,
- "LABEL_9": 9
+ "1025": 0,
+ "1071": 1,
+ "131": 2,
+ "138": 3,
+ "284": 4,
+ "285": 5,
+ "435": 6,
+ "436": 7,
+ "595": 8,
+ "657": 9,
+ "664": 10,
+ "682": 11,
+ "684": 12,
+ "691": 13,
+ "693": 14,
+ "697": 15,
+ "703": 16,
+ "706": 17,
+ "707": 18,
+ "710": 19,
+ "74": 20,
+ "754": 21,
+ "829": 22,
+ "862": 23,
+ "913": 24,
+ "94": 25
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
emissions.csv CHANGED
@@ -1,2 +1,2 @@
  timestamp,project_name,run_id,experiment_id,duration,emissions,emissions_rate,cpu_power,gpu_power,ram_power,cpu_energy,gpu_energy,ram_energy,energy_consumed,country_name,country_iso_code,region,cloud_provider,cloud_region,os,python_version,codecarbon_version,cpu_count,cpu_model,gpu_count,gpu_model,longitude,latitude,ram_total_size,tracking_mode,on_cloud,pue
- 2025-09-02T09:11:46,codecarbon,6d873b53-faff-4d84-8a8c-df5761d6e266,5b0fa12a-3dd7-45bb-9766-cc326314d9f1,335.906803624006,0.00551183463172775,1.6408821054715427e-05,42.5,424.2278887662371,94.34468507766725,0.00396303443194726,0.03960241251522234,0.008797060598005339,0.05236250754517494,Luxembourg,LUX,luxembourg,,,Linux-6.8.0-71-generic-x86_64-with-glibc2.39,3.12.3,2.8.4,64,AMD EPYC 9124 16-Core Processor,2,2 x NVIDIA L40S,6.1294,49.6113,251.5858268737793,machine,N,1.0
+ 2025-09-02T09:27:51,codecarbon,e4ad32bd-2245-4e4a-8192-6a84da1b81b1,5b0fa12a-3dd7-45bb-9766-cc326314d9f1,459.1436821189709,0.007866215688287334,1.7132361817512892e-05,42.5,439.47611356781334,94.34468507766725,0.005415837676420698,0.05729426472426269,0.012019058731787975,0.07472916113247137,Luxembourg,LUX,luxembourg,,,Linux-6.8.0-71-generic-x86_64-with-glibc2.39,3.12.3,2.8.4,64,AMD EPYC 9124 16-Core Processor,2,2 x NVIDIA L40S,6.1294,49.6113,251.5858268737793,machine,N,1.0
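The replaced row follows the standard CodeCarbon schema: the 40-epoch run took about 459 s and is estimated at roughly 0.0079 kg CO₂eq. A minimal sketch of how such a row is typically produced around a training run, assuming the defaults suggested by this file (`project_name="codecarbon"`, output to `emissions.csv`):

```python
# Sketch: wrap the fine-tuning step with a CodeCarbon tracker so each run
# appends one row to emissions.csv, as in this commit.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="codecarbon", output_file="emissions.csv")
tracker.start()
try:
    # trainer.train()  # the actual training step (omitted in this sketch)
    pass
finally:
    emissions_kg = tracker.stop()  # writes the CSV row and returns estimated kg CO2eq
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```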
metrics.json CHANGED
@@ -1,9 +1,9 @@
  {
- "eval_loss": 2.001497268676758,
- "eval_accuracy": 0.7126436781609196,
- "eval_f1_macro": 0.4114671800348193,
- "eval_runtime": 0.2929,
- "eval_samples_per_second": 297.009,
- "eval_steps_per_second": 10.242,
- "epoch": 35.0
+ "eval_loss": 1.3446602821350098,
+ "eval_accuracy": 0.7727272727272727,
+ "eval_f1_macro": 0.4235300491458127,
+ "eval_runtime": 0.3068,
+ "eval_samples_per_second": 286.859,
+ "eval_steps_per_second": 9.779,
+ "epoch": 40.0
  }
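`eval_accuracy` and `eval_f1_macro` are the usual Trainer metrics; macro-F1 (≈0.42) sitting well below accuracy (≈0.77) suggests the rarer CWE parent classes are still often missed. A minimal sketch of a `compute_metrics` callback that would produce these fields, assuming it mirrors the one used for these numbers:

```python
# Sketch: Trainer-style compute_metrics yielding eval_accuracy / eval_f1_macro.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1_score(labels, preds, average="macro"),
    }
```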
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dce6546da5bd0c38eee05752347dd713ebb99e1781c0f6562d5f3174277f6d54
+ oid sha256:74fc4adfc1258908f113a3624d99b413a3fda5caf269f0515e607326f43fc7ac
  size 498686648
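The `model.safetensors` entry is a Git LFS pointer: only the `oid sha256` changes because the retrained weights differ, while the serialized size stays at 498,686,648 bytes. A small sketch for checking a downloaded copy against the pointer's digest, assuming the file has already been fetched locally:

```python
# Sketch: verify a downloaded model.safetensors against the sha256 oid in its LFS pointer.
import hashlib

EXPECTED = "74fc4adfc1258908f113a3624d99b413a3fda5caf269f0515e607326f43fc7ac"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "checksum mismatch: file does not match the LFS pointer"
print("model.safetensors matches the committed LFS oid")
```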