---
library_name: transformers
license: mit
base_model: roberta-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: cwe-parent-vulnerability-classification-roberta-base
    results: []
---

# cwe-parent-vulnerability-classification-roberta-base

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.8213
- Accuracy: 0.5673
- F1 Macro: 0.3582
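The Accuracy and F1 Macro figures follow the usual scikit-learn definitions; a minimal sketch of how they are computed (the label values below are illustrative, not from the actual evaluation set):

```python
from sklearn.metrics import accuracy_score, f1_score

# Illustrative true/predicted parent-CWE class ids (not the real eval data)
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 1, 2, 1, 0]

# Accuracy: fraction of exactly matching predictions
accuracy = accuracy_score(y_true, y_pred)

# "F1 Macro": unweighted mean of per-class F1 scores, so rare
# CWE classes weigh as much as frequent ones
f1_macro = f1_score(y_true, y_pred, average="macro")
```

The gap between accuracy and macro F1 in the table below suggests the class distribution is imbalanced, which is typical for CWE-labelled vulnerability data.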

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
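These settings map onto the Hugging Face `Trainer` API roughly as follows; a sketch, not the actual training script (`output_dir` and the per-epoch eval strategy are assumptions, everything else is taken from the list above):

```python
from transformers import TrainingArguments

# Sketch reconstructing the hyperparameters above.
# output_dir and eval_strategy are assumptions, not from the card.
training_args = TrainingArguments(
    output_dir="cwe-parent-vulnerability-classification-roberta-base",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    eval_strategy="epoch",  # assumption: the table below reports one row per epoch
)
```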

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 3.2359        | 1.0   | 237  | 3.0937          | 0.0334   | 0.0114   |
| 2.3278        | 2.0   | 474  | 2.2992          | 0.4303   | 0.2323   |
| 1.8161        | 3.0   | 711  | 1.9653          | 0.5089   | 0.2917   |
| 1.6283        | 4.0   | 948  | 1.8788          | 0.5292   | 0.3007   |
| 1.3258        | 5.0   | 1185 | 1.8253          | 0.5864   | 0.3392   |
| 1.3328        | 6.0   | 1422 | 1.8213          | 0.5673   | 0.3582   |
| 1.2914        | 7.0   | 1659 | 1.8573          | 0.5900   | 0.3604   |
| 0.8614        | 8.0   | 1896 | 1.8345          | 0.6198   | 0.3810   |
| 0.8253        | 9.0   | 2133 | 2.0121          | 0.6067   | 0.3568   |
| 0.7363        | 10.0  | 2370 | 2.0059          | 0.6508   | 0.4071   |
| 0.6406        | 11.0  | 2607 | 2.0866          | 0.6472   | 0.3906   |
| 0.4574        | 12.0  | 2844 | 2.1904          | 0.6377   | 0.3902   |
| 0.4834        | 13.0  | 3081 | 2.2417          | 0.6281   | 0.3797   |
| 0.3078        | 14.0  | 3318 | 2.3233          | 0.6830   | 0.4132   |
| 0.3217        | 15.0  | 3555 | 2.4203          | 0.6639   | 0.4020   |
| 0.2823        | 16.0  | 3792 | 2.4227          | 0.6615   | 0.3918   |
| 0.2392        | 17.0  | 4029 | 2.6133          | 0.6865   | 0.3844   |
| 0.1956        | 18.0  | 4266 | 2.6611          | 0.6675   | 0.3789   |
| 0.1504        | 19.0  | 4503 | 2.7612          | 0.6746   | 0.4158   |
| 0.1109        | 20.0  | 4740 | 2.8752          | 0.6710   | 0.3776   |
| 0.1091        | 21.0  | 4977 | 3.0530          | 0.6949   | 0.3776   |
| 0.13          | 22.0  | 5214 | 3.0540          | 0.6889   | 0.3766   |
| 0.0797        | 23.0  | 5451 | 3.2854          | 0.6770   | 0.4038   |
| 0.145         | 24.0  | 5688 | 3.2146          | 0.6973   | 0.3877   |
| 0.1004        | 25.0  | 5925 | 3.4159          | 0.6937   | 0.3850   |
| 0.0486        | 26.0  | 6162 | 3.4003          | 0.6865   | 0.3767   |
| 0.0544        | 27.0  | 6399 | 3.3643          | 0.6889   | 0.3822   |
| 0.082         | 28.0  | 6636 | 3.4874          | 0.6913   | 0.3818   |
| 0.0468        | 29.0  | 6873 | 3.5810          | 0.6877   | 0.3815   |
| 0.0308        | 30.0  | 7110 | 3.7565          | 0.6949   | 0.3837   |
| 0.0366        | 31.0  | 7347 | 3.6714          | 0.6961   | 0.3814   |
| 0.0388        | 32.0  | 7584 | 3.8502          | 0.6973   | 0.3957   |
| 0.0341        | 33.0  | 7821 | 3.8415          | 0.6973   | 0.3973   |
| 0.0417        | 34.0  | 8058 | 3.9342          | 0.7056   | 0.3864   |
| 0.0218        | 35.0  | 8295 | 3.9367          | 0.6996   | 0.3843   |
| 0.0607        | 36.0  | 8532 | 3.9262          | 0.6996   | 0.3841   |
| 0.0148        | 37.0  | 8769 | 3.9609          | 0.7020   | 0.4189   |
| 0.0186        | 38.0  | 9006 | 3.9656          | 0.6985   | 0.3822   |
| 0.0387        | 39.0  | 9243 | 3.9738          | 0.7044   | 0.3850   |
| 0.008         | 40.0  | 9480 | 3.9906          | 0.7044   | 0.3861   |
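With a linear scheduler and 9480 total optimization steps (237 steps/epoch × 40 epochs, per the table above), the learning rate decays from 1e-05 toward 0; a minimal sketch, assuming no warmup (the `Trainer` default of `warmup_steps=0`):

```python
TOTAL_STEPS = 9480  # 237 steps/epoch * 40 epochs (from the training table)
BASE_LR = 1e-5

def linear_lr(step: int, base_lr: float = BASE_LR, total: int = TOTAL_STEPS) -> float:
    """Linear decay with no warmup: base_lr at step 0, 0 at the final step."""
    return base_lr * max(0.0, 1.0 - step / total)

print(linear_lr(0))     # 1e-05 at the start of training
print(linear_lr(4740))  # halfway through (end of epoch 20): 5e-06
```

The decaying learning rate explains why training loss keeps shrinking through epoch 40 while validation loss rises after epoch 6 (the reported checkpoint): the model continues to fit the training set ever more finely as it overfits.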

### Framework versions

- Transformers 4.57.1
- Pytorch 2.9.1+cu128
- Datasets 4.4.1
- Tokenizers 0.22.1