---
license: apache-2.0
language:
- en
base_model:
- bobbysam/resnet18-image-detector
library_name: transformers
pipeline_tag: image-classification
tags:
- computer-vision
- image-classification
- ai-detection
- pytorch
- resnet
datasets:
- custom
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: resnet18-image-detector
  results:
  - task:
      type: image-classification
      name: AI vs Real Image Detection
    dataset:
      name: Custom AI Detection Dataset
      type: custom
    metrics:
    - type: accuracy
      value: 0.95
      name: Accuracy
    - type: f1
      value: 0.94
      name: F1 Score
    - type: precision
      value: 0.93
      name: Precision
    - type: recall
      value: 0.96
      name: Recall
---

# ResNet18 AI Image Detector

**Repository:** [bobbysam/resnet18-image-detector](https://huggingface.co/bobbysam/resnet18-image-detector)

[![Train](https://huggingface.co/datasets/huggingface/badges/raw/main/train-on-spaces-sm.svg)](https://huggingface.co/spaces/autotrain-projects/train-resnet18-detector)
[![Deploy](https://huggingface.co/datasets/huggingface/badges/raw/main/deploy-on-spaces-sm.svg)](https://huggingface.co/spaces/autotrain-projects/deploy-resnet18-detector)

---

## 🧠 What does this model do?

This is a **ResNet18-based deep neural network** trained to **detect whether an input image is a real photograph or AI-generated** (binary classification: `real` vs. `ai_generated`). It is part of the [ProofGuard](https://github.com/Proofguard/proofguard-backend) project and can be used to build trustworthy AI image detection pipelines.

**Key Features:**

- 🔬 Binary classification: real vs. AI-generated images
- 🚀 Fast inference with the ResNet18 architecture
- 🤗 Compatible with Hugging Face Transformers
- 📊 Comprehensive evaluation metrics
- 🎯 Easy-to-use inference API (see the example below)
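
## 🚀 How to use

The snippet below is a minimal usage sketch with the Transformers `image-classification` pipeline, as suggested by the card metadata (`library_name: transformers`, `pipeline_tag: image-classification`). The label names (`real` / `ai_generated`) and the file path `example.jpg` are taken from the description above and are illustrative, not guaranteed outputs.

```python
from PIL import Image
from transformers import pipeline

# Load the detector as an image-classification pipeline.
detector = pipeline("image-classification", model="bobbysam/resnet18-image-detector")

# Classify a local image; "example.jpg" is a placeholder path.
image = Image.open("example.jpg").convert("RGB")
for prediction in detector(image):
    # Each prediction is a dict with "label" (assumed "real" / "ai_generated") and "score".
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```

If you need finer control over preprocessing, the model should also load via `AutoImageProcessor` and `AutoModelForImageClassification`.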
---

# resnet18-image-detector

This model is a fine-tuned ResNet18 image classifier trained on a custom AI-detection dataset (see the metadata above; the base checkpoint is not further documented).
It achieves the following results on the evaluation set:

- Loss: 0.2759
- Accuracy: 0.9555
- F1: 0.9555
- Precision: 0.9560
- Recall: 0.9555

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` is given at the end of this card):

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.3995        | 0.0533 | 50   | 0.6382          | 0.6905   | 0.6824 | 0.7146    | 0.6905 |
| 1.1186        | 0.1067 | 100  | 0.4529          | 0.8619   | 0.8619 | 0.8634    | 0.8619 |
| 0.7891        | 0.16   | 150  | 0.3469          | 0.9124   | 0.9124 | 0.9124    | 0.9124 |
| 0.7927        | 0.2133 | 200  | 0.3208          | 0.9305   | 0.9305 | 0.9305    | 0.9305 |
| 0.7672        | 0.2667 | 250  | 0.3095          | 0.9417   | 0.9418 | 0.9418    | 0.9417 |
| 0.7395        | 0.32   | 300  | 0.3625          | 0.9001   | 0.8992 | 0.9125    | 0.9001 |
| 0.6937        | 0.3733 | 350  | 0.2940          | 0.9483   | 0.9483 | 0.9483    | 0.9483 |
| 0.6654        | 0.4267 | 400  | 0.3315          | 0.9268   | 0.9266 | 0.9329    | 0.9268 |
| 0.6647        | 0.48   | 450  | 0.2872          | 0.9487   | 0.9487 | 0.9497    | 0.9487 |
| 0.7021        | 0.5333 | 500  | 0.2857          | 0.9488   | 0.9488 | 0.9491    | 0.9488 |
| 0.6458        | 0.5867 | 550  | 0.2759          | 0.9555   | 0.9555 | 0.9560    | 0.9555 |
| 0.6634        | 0.64   | 600  | 0.2830          | 0.9516   | 0.9515 | 0.9517    | 0.9516 |
| 0.6534        | 0.6933 | 650  | 0.2858          | 0.9507   | 0.9506 | 0.9533    | 0.9507 |

### Framework versions

- Transformers 4.54.1
- PyTorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
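
### Reproducing the configuration

For reference, the hyperparameters listed above map onto Transformers `TrainingArguments` roughly as follows. This is a minimal sketch, not the exact training script: `output_dir` is a placeholder, and the 50-step evaluation cadence is inferred from the results table rather than stated explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="resnet18-image-detector",   # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,           # effective train batch size of 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    num_train_epochs=3,
    eval_strategy="steps",
    eval_steps=50,                           # matches the 50-step cadence in the results table
)
```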