---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-arabertv2_D3Tok_EMD_19levels
  results: []
---

# bert-base-arabertv2_D3Tok_EMD_19levels

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0306
- Macro F1: 0.4521
- Macro Precision: 0.4818
- Macro Recall: 0.4453
- Accuracy: 0.5291
- Accuracy With Margin: 0.6966
- Distance: 1.1860
- Quadratic Weighted Kappa: 0.7940
- Accuracy 7: 0.6300
- Accuracy 5: 0.6726
- Accuracy 3: 0.7412

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch appears at the end of this card):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Macro F1 | Macro Precision | Macro Recall | Accuracy | Accuracy With Margin | Distance | Quadratic Weighted Kappa | Accuracy 7 | Accuracy 5 | Accuracy 3 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:--------------------:|:--------:|:------------------------:|:----------:|:----------:|:----------:|
| 1.0454        | 1.0   | 857   | 0.9505          | 0.2627   | 0.3239          | 0.3181       | 0.4703   | 0.6428               | 1.3360   | 0.7758                   | 0.6030     | 0.6647     | 0.7330     |
| 0.718         | 2.0   | 1714  | 0.8807          | 0.3647   | 0.3734          | 0.3709       | 0.5175   | 0.6841               | 1.1967   | 0.7981                   | 0.6297     | 0.6787     | 0.7490     |
| 0.5604        | 3.0   | 2571  | 0.9115          | 0.4027   | 0.4449          | 0.3915       | 0.5353   | 0.6837               | 1.1989   | 0.7920                   | 0.6305     | 0.6746     | 0.7393     |
| 0.4615        | 4.0   | 3428  | 0.9439          | 0.4163   | 0.4850          | 0.4124       | 0.5328   | 0.6830               | 1.2074   | 0.7942                   | 0.6256     | 0.6699     | 0.7378     |
| 0.3547        | 5.0   | 4285  | 0.9803          | 0.4059   | 0.4732          | 0.3965       | 0.5215   | 0.6807               | 1.2193   | 0.7864                   | 0.6218     | 0.6655     | 0.7363     |
| 0.2972        | 6.0   | 5142  | 0.9809          | 0.4499   | 0.4851          | 0.4425       | 0.5356   | 0.6856               | 1.2003   | 0.7931                   | 0.6287     | 0.6695     | 0.7393     |
| 0.2529        | 7.0   | 5999  | 0.9922          | 0.4427   | 0.4791          | 0.4293       | 0.5280   | 0.6885               | 1.1907   | 0.7946                   | 0.6268     | 0.6715     | 0.7431     |
| 0.2002        | 8.0   | 6856  | 1.0047          | 0.4516   | 0.4747          | 0.4448       | 0.5316   | 0.6902               | 1.1818   | 0.7974                   | 0.6306     | 0.6731     | 0.7390     |
| 0.1734        | 9.0   | 7713  | 1.0181          | 0.4545   | 0.4949          | 0.4515       | 0.5353   | 0.6951               | 1.1945   | 0.7920                   | 0.6354     | 0.6784     | 0.7440     |
| 0.1466        | 10.0  | 8570  | 1.0184          | 0.4469   | 0.4748          | 0.4414       | 0.5271   | 0.6951               | 1.1871   | 0.7960                   | 0.6312     | 0.6739     | 0.7404     |
| 0.1342        | 11.0  | 9427  | 1.0291          | 0.4496   | 0.4739          | 0.4471       | 0.5287   | 0.6985               | 1.1815   | 0.7980                   | 0.6328     | 0.6770     | 0.7446     |
| 0.1166        | 12.0  | 10284 | 1.0306          | 0.4521   | 0.4818          | 0.4453       | 0.5291   | 0.6966               | 1.1860   | 0.7940                   | 0.6300     | 0.6726     | 0.7412     |

### Framework versions

- Transformers 4.53.2
- PyTorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
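
## How to use

A minimal inference sketch, assuming the checkpoint is hosted under a repository id matching the model name above and that it was fine-tuned as a 19-way sequence classifier (one class per level, as the `19levels` suffix suggests). The repo id and input text below are placeholders; inputs are presumably expected in D3Tok form.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bert-base-arabertv2_D3Tok_EMD_19levels"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Placeholder input; preprocess Arabic text into D3Tok form before tokenizing.
text = "..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

level_index = logits.argmax(dim=-1).item()  # class index 0-18 maps to one of the 19 levels
print(level_index)
```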
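
## Training configuration sketch

The hyperparameters listed under "Training hyperparameters" can be expressed as `TrainingArguments` for the `Trainer` API. Only the values from this card are filled in; the batch size is assumed to be per device, and the dataset/model setup is omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-arabertv2_D3Tok_EMD_19levels",
    learning_rate=5e-5,
    per_device_train_batch_size=64,  # card reports train_batch_size: 64
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=12,
)
```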
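The quadratic weighted kappa reported in the results is presumably the standard Cohen's kappa with quadratic weights, which can be computed with scikit-learn; the label arrays below are purely illustrative.

```python
from sklearn.metrics import cohen_kappa_score

y_true = [0, 3, 7, 12, 18]  # illustrative gold level indices (0-18)
y_pred = [1, 3, 6, 12, 17]  # illustrative model predictions
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```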