=======================================
Model Training Summary
=======================================
Model: prajjwal1/bert-tiny
Task: wnli
Training Date: Sat Sep 6 02:53:11 PM JST 2025

Hyperparameters:
- Epochs: 5
- Batch Size: 32
- Learning Rate: 2e-5
- Warmup Steps: 200
- Weight Decay: 0.01
- Max Sequence Length: 128

Training Status: SUCCESS
Job: 15/15
Model Directory: /home/ubuntu/master-research/not-trust-quivalent/models/wnli/prajjwal1-bert-tiny
Script Path: /home/ubuntu/master-research/not-trust-quivalent/src/finetune/train.py
Output Directory: /home/ubuntu/master-research/not-trust-quivalent/models
=======================================
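For reference, a minimal sketch of what "Warmup Steps: 200" implies for the learning-rate schedule, assuming the run uses the common linear-warmup-then-linear-decay schedule (the Hugging Face `transformers` default when `warmup_steps` is set; the actual train.py is not shown here):

```python
# Hedged sketch: linear warmup to the base LR over 200 steps,
# then linear decay to 0. BASE_LR and WARMUP_STEPS are taken
# from the summary above; the schedule shape is an assumption.

BASE_LR = 2e-5      # Learning Rate from the summary
WARMUP_STEPS = 200  # Warmup Steps from the summary

def lr_at(step: int, total_steps: int) -> float:
    """Learning rate at a given optimizer step."""
    if step < WARMUP_STEPS:
        # Ramp linearly from 0 up to BASE_LR during warmup.
        return BASE_LR * step / WARMUP_STEPS
    # Decay linearly from BASE_LR down to 0 afterwards.
    remaining = max(total_steps - step, 0)
    return BASE_LR * remaining / max(total_steps - WARMUP_STEPS, 1)
```

For example, with 1000 total steps the rate is 1e-5 halfway through warmup (step 100), peaks at 2e-5 at step 200, and reaches 0 at step 1000.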