Version_weird_ASAP_FineTuningBERT_AugV12_k10_task1_organization_k10_k10_fold4

This model is a fine-tuned version of bert-base-uncased (the training dataset is not specified in this card). It achieves the following results on the evaluation set:

  • Loss: 0.9002
  • Qwk: 0.6066
  • Mse: 0.9002
  • Rmse: 0.9488
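
For reference, metrics of this kind can be computed from raw model outputs with scikit-learn. The snippet below is a minimal sketch, not code from this repository: the function name `score_predictions`, the `predictions`/`labels` arrays, and the rounding of regression outputs to integer score bins are all assumptions.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def score_predictions(predictions, labels):
    """Compute QWK, MSE, and RMSE for essay-score predictions (sketch, hypothetical helper)."""
    labels = np.asarray(labels).astype(int)
    # Assumption: regression outputs are rounded to the nearest integer score for QWK.
    rounded = np.rint(predictions).astype(int)
    qwk = cohen_kappa_score(labels, rounded, weights="quadratic")
    mse = mean_squared_error(labels, predictions)
    rmse = float(np.sqrt(mse))
    return {"qwk": qwk, "mse": mse, "rmse": rmse}
```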

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
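
These settings map directly onto Hugging Face `TrainingArguments`. The block below is a minimal reproduction sketch only; the output directory is a placeholder, and the model/data loading code is not part of this card.

```python
from transformers import TrainingArguments

# Sketch: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./output",          # placeholder, not from this repository
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```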

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 1.0   | 3    | 7.8056          | 0.0037 | 7.8056 | 2.7938 |
| No log        | 2.0   | 6    | 5.8481          | 0.0213 | 5.8481 | 2.4183 |
| No log        | 3.0   | 9    | 4.2144          | 0.0156 | 4.2144 | 2.0529 |
| No log        | 4.0   | 12   | 2.7847          | 0.0118 | 2.7847 | 1.6687 |
| No log        | 5.0   | 15   | 2.0015          | 0.1082 | 2.0015 | 1.4147 |
| No log        | 6.0   | 18   | 1.3584          | 0.0420 | 1.3584 | 1.1655 |
| No log        | 7.0   | 21   | 1.0811          | 0.0316 | 1.0811 | 1.0398 |
| No log        | 8.0   | 24   | 0.9124          | 0.0530 | 0.9124 | 0.9552 |
| No log        | 9.0   | 27   | 0.8243          | 0.1118 | 0.8243 | 0.9079 |
| No log        | 10.0  | 30   | 0.9123          | 0.0747 | 0.9123 | 0.9552 |
| No log        | 11.0  | 33   | 1.2739          | 0.0454 | 1.2739 | 1.1287 |
| No log        | 12.0  | 36   | 1.4335          | 0.1759 | 1.4335 | 1.1973 |
| No log        | 13.0  | 39   | 1.8193          | 0.1162 | 1.8193 | 1.3488 |
| No log        | 14.0  | 42   | 1.8561          | 0.1435 | 1.8561 | 1.3624 |
| No log        | 15.0  | 45   | 0.9841          | 0.3761 | 0.9841 | 0.9920 |
| No log        | 16.0  | 48   | 0.7313          | 0.5326 | 0.7313 | 0.8552 |
| No log        | 17.0  | 51   | 0.7339          | 0.5702 | 0.7339 | 0.8567 |
| No log        | 18.0  | 54   | 0.5834          | 0.5840 | 0.5834 | 0.7638 |
| No log        | 19.0  | 57   | 0.6078          | 0.5890 | 0.6078 | 0.7796 |
| No log        | 20.0  | 60   | 0.7324          | 0.6326 | 0.7324 | 0.8558 |
| No log        | 21.0  | 63   | 0.5792          | 0.6737 | 0.5792 | 0.7611 |
| No log        | 22.0  | 66   | 1.4680          | 0.4653 | 1.4680 | 1.2116 |
| No log        | 23.0  | 69   | 0.6729          | 0.6499 | 0.6729 | 0.8203 |
| No log        | 24.0  | 72   | 1.4444          | 0.4875 | 1.4444 | 1.2018 |
| No log        | 25.0  | 75   | 1.5153          | 0.4856 | 1.5153 | 1.2310 |
| No log        | 26.0  | 78   | 0.6709          | 0.6385 | 0.6709 | 0.8191 |
| No log        | 27.0  | 81   | 1.4818          | 0.4740 | 1.4818 | 1.2173 |
| No log        | 28.0  | 84   | 1.2698          | 0.5007 | 1.2698 | 1.1268 |
| No log        | 29.0  | 87   | 0.5879          | 0.6431 | 0.5879 | 0.7667 |
| No log        | 30.0  | 90   | 0.6411          | 0.6549 | 0.6411 | 0.8007 |
| No log        | 31.0  | 93   | 1.3267          | 0.5113 | 1.3267 | 1.1518 |
| No log        | 32.0  | 96   | 0.8309          | 0.6278 | 0.8309 | 0.9115 |
| No log        | 33.0  | 99   | 0.8959          | 0.5986 | 0.8959 | 0.9465 |
| No log        | 34.0  | 102  | 1.2955          | 0.4868 | 1.2955 | 1.1382 |
| No log        | 35.0  | 105  | 0.5478          | 0.6556 | 0.5478 | 0.7401 |
| No log        | 36.0  | 108  | 0.5421          | 0.6591 | 0.5421 | 0.7363 |
| No log        | 37.0  | 111  | 0.9267          | 0.5830 | 0.9267 | 0.9627 |
| No log        | 38.0  | 114  | 1.6690          | 0.4389 | 1.6690 | 1.2919 |
| No log        | 39.0  | 117  | 0.8910          | 0.5954 | 0.8910 | 0.9439 |
| No log        | 40.0  | 120  | 0.8903          | 0.5997 | 0.8903 | 0.9436 |
| No log        | 41.0  | 123  | 0.9002          | 0.6066 | 0.9002 | 0.9488 |

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0
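
A short usage sketch for loading this checkpoint with the pinned Transformers version. The use of a sequence-classification head with a single regression output is an assumption based on the MSE/RMSE metrics above, not something stated in this card.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "genki10/Version_weird_ASAP_FineTuningBERT_AugV12_k10_task1_organization_k10_k10_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# "An example essay response." is an illustrative input, not from the training data.
inputs = tokenizer("An example essay response.", return_tensors="pt", truncation=True)
outputs = model(**inputs)
print(outputs.logits)  # raw score(s); interpretation depends on the original training setup
```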