Rohan Kumar Shah committed
Commit bfb2e8a · 1 Parent(s): 592fbdc

ai_human_classifier_added

docs/detector/ai_human_image_checker.md CHANGED
@@ -0,0 +1,132 @@
+ Real vs. Fake Image Classification for Production Pipeline
+ ==========================================================
+
+ 1\. Business Problem
+ --------------------
+
+ This project addresses the critical business need to automatically identify and flag manipulated or synthetically generated images. By accurately classifying images as **"real"** or **"fake,"** we can enhance the integrity of our platform, prevent the spread of misinformation, and protect our users from fraudulent content. This solution is designed for integration into our production pipeline to process images in real time.
+
+ 2\. Solution Overview
+ ---------------------
+
+ This solution leverages OpenAI's CLIP (Contrastive Language-Image Pre-Training) model to differentiate between real and fake images. The system operates as follows:
+
+ 1. **Feature Extraction:** A pre-trained CLIP model ('ViT-L/14') converts input images into 768-dimensional feature vectors.
+
+ 2. **Classification:** A Support Vector Machine (SVM) model, trained on our internal dataset of real and fake images, classifies the feature vectors.
+
+ 3. **Deployment:** The trained model is deployed as a service that can be integrated into our production image processing pipeline.
+
+ The model has achieved an accuracy of **98.29%** on our internal test set, demonstrating its effectiveness in distinguishing between real and fake images.
+
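+ For reference, here is a minimal end-to-end sketch of the two-stage pipeline described above. It assumes the `clip` package from requirements.txt and an SVM persisted with joblib; the file names `svm_model.joblib` and `example.jpg` are illustrative placeholders.
+
+ ```python
+ import clip
+ import joblib
+ import torch
+ from PIL import Image
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, preprocess = clip.load("ViT-L/14", device=device)  # CLIP backbone
+ svm = joblib.load("svm_model.joblib")                     # illustrative path
+
+ # Stage 1: encode the image into a 768-dimensional feature vector
+ image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0).to(device)
+ with torch.no_grad():
+     features = model.encode_image(image)  # shape: (1, 768)
+
+ # Stage 2: classify the feature vector (0 = real, 1 = fake)
+ label = svm.predict(features.cpu().numpy())[0]
+ print("fake" if label == 1 else "real")
+ ```
+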
+ 3\. Getting Started
+ -------------------
+
+ ### 3.1. Dependencies
+
+ To ensure a reproducible environment, all dependencies are listed in the requirements.txt file. Install them using pip:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ **requirements.txt**:
+ - numpy
+ - Pillow
+ - torch
+ - clip-by-openai
+ - scikit-learn
+ - tqdm
+ - seaborn
+ - matplotlib
+
+ ### 3.2. Data Preparation
+
+ The model was trained on a dataset of real and fake images obtained from Kaggle: https://www.kaggle.com/datasets/tristanzhang32/ai-generated-images-vs-real-images/data
+
+ ### 3.3. Usage
+
+ #### 3.3.1. Feature Extraction
+
+ To extract features from a new dataset, run the following command:
+
+ ```bash
+ python extract_features.py --data_dir /path/to/your/data --output_file features.npz
+ ```
+
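+ For context, the sketch below shows what extract_features.py is assumed to do: encode every image under --data_dir with CLIP and save the feature matrix and labels to an .npz file. The real/fake folder layout and the `features`/`labels` array names are assumptions, not confirmed details of the script.
+
+ ```python
+ import clip
+ import numpy as np
+ import torch
+ from pathlib import Path
+ from PIL import Image
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, preprocess = clip.load("ViT-L/14", device=device)
+
+ features, labels = [], []
+ for label, folder in enumerate(["real", "fake"]):  # assumed folder layout
+     for path in Path("/path/to/your/data", folder).glob("*.jpg"):
+         img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
+         with torch.no_grad():
+             features.append(model.encode_image(img).cpu().numpy()[0])
+         labels.append(label)
+
+ np.savez("features.npz", features=np.array(features), labels=np.array(labels))
+ ```
+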
+ #### 3.3.2. Model Training
+
+ To retrain the SVM model on a new set of extracted features, run:
+
+ ```bash
+ python train_model.py --features_file features.npz --model_output_path model.joblib
+ ```
+
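+ The core of train_model.py is presumably along these lines: load the saved features, fit an SVM, and persist it with joblib. Note that `probability=True` is what enables the `predict_proba` confidence path used by the inference service; the array names follow the extraction sketch above and are assumptions.
+
+ ```python
+ import joblib
+ import numpy as np
+ from sklearn.svm import SVC
+
+ data = np.load("features.npz")
+ X, y = data["features"], data["labels"]
+
+ # probability=True enables predict_proba for confidence scores at inference
+ clf = SVC(kernel="linear", probability=True)
+ clf.fit(X, y)
+
+ joblib.dump(clf, "model.joblib")
+ ```
+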
+ #### 3.3.3. Inference
+
+ To classify a single image using the trained model, use the provided inference script:
+
+ ```bash
+ python classify.py --image_path /path/to/your/image.jpg --model_path model.joblib
+ ```
+
+ 4\. Production Deployment
+ -------------------------
+
+ The image classification model is deployed as a microservice. The service exposes an API endpoint that accepts an image and returns a classification result ("real" or "fake").
+
+ ### 4.1. API Specification
+
+ * **Endpoint:** /api/classify (the router is mounted under the /api prefix in main.py)
+
+ * **Method:** POST
+
+ * **Request Body:** multipart/form-data with a single field `image`.
+
+ * **Response:**
+
+   * Success:
+
+     ```json
+     { "classification": "real", "confidence": 0.95 }
+     ```
+
+   * Error:
+
+     ```json
+     { "error": "Error message" }
+     ```
+
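+ As a usage illustration, a client could call the endpoint as sketched below, assuming the service is running locally on uvicorn's default port 8000:
+
+ ```python
+ import requests
+
+ with open("example.jpg", "rb") as f:
+     resp = requests.post(
+         "http://localhost:8000/api/classify",
+         files={"image": ("example.jpg", f, "image/jpeg")},
+     )
+ print(resp.json())  # e.g. {"classification": "real", "confidence": 0.95}
+ ```
+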
+ ### 4.2. Scalability and Monitoring
+
+ The service is deployed in a containerized environment (e.g., Docker) and managed by an orchestrator (e.g., Kubernetes) to ensure scalability and high availability. Monitoring and logging are in place to track model performance, API latency, and error rates.
+
+ 5\. Model Versioning
+ --------------------
+
+ We use a combination of Git for code versioning and a model registry for tracking trained model artifacts. Each model is versioned and associated with the commit hash of the code that produced it. The current production model is **v1.2.0**.
+
+ 6\. Testing
+ -----------
+
+ The project includes a suite of tests to ensure correctness and reliability:
+
+ * **Unit tests:** To verify individual functions and components.
+
+ * **Integration tests:** To test the interaction between different parts of the system.
+
+ * **Model evaluation tests:** To continuously monitor model performance on a golden dataset.
+
+ To run the tests, execute:
+
+ ```bash
+ pytest
+ ```
+
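+ As an illustration, a minimal endpoint test might look like the sketch below, using FastAPI's TestClient to exercise the unsupported-media-type guard in routes.py; the test name is hypothetical.
+
+ ```python
+ from fastapi.testclient import TestClient
+ from main import app
+
+ client = TestClient(app)
+
+ def test_rejects_non_image_uploads():
+     # A plain-text upload should be refused with 415 Unsupported Media Type
+     resp = client.post(
+         "/api/classify",
+         files={"image": ("notes.txt", b"plain text", "text/plain")},
+     )
+     assert resp.status_code == 415
+ ```
+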
+ 7\. Future Work
+ ---------------
+
+ * **Explore more advanced classifiers:** Investigate the use of neural network-based classifiers on top of CLIP features.
+
+ * **Fine-tune the CLIP model:** For even better performance, we can fine-tune the CLIP model on our specific domain of images.
+
+ * **Expand the training dataset:** Continuously augment the training data with new examples of real and fake images to improve the model's robustness.
+
+ 8\. Contact/Support
+ -------------------
+
+ For any questions or issues regarding this project, please contact the Machine Learning team at [your-team-email@yourcompany.com](mailto:your-team-email@yourcompany.com).
features/ai_human_image_classifier/controller.py ADDED
@@ -0,0 +1,35 @@
+ from typing import IO
+ from preprocessor import preprocessor
+ from inferencer import inferencer
+
+ class ClassificationController:
+     """
+     Controller to handle the image classification logic.
+     """
+     def classify_image(self, image_file: IO) -> dict:
+         """
+         Orchestrates the classification of a single image file.
+
+         Args:
+             image_file (IO): The image file to classify.
+
+         Returns:
+             dict: The classification result.
+         """
+         try:
+             # Step 1: Preprocess the image
+             image_tensor = preprocessor.process(image_file)
+
+             # Step 2: Perform inference
+             result = inferencer.predict(image_tensor)
+
+             return result
+         except ValueError as e:
+             # Handle specific errors like invalid images
+             return {"error": str(e)}
+         except Exception as e:
+             # Handle unexpected errors
+             print(f"An unexpected error occurred: {e}")
+             return {"error": "An internal error occurred during classification."}
+
+ controller = ClassificationController()
features/ai_human_image_classifier/inferencer.py ADDED
@@ -0,0 +1,48 @@
+ import torch
+ import numpy as np
+ from model_loader import models
+
+ class Inferencer:
+
+     def __init__(self):
+         self.clip_model = models.clip_model
+         self.svm_model = models.svm_model
+
+     @torch.no_grad()
+     def predict(self, image_tensor: torch.Tensor) -> dict:
+         """
+         Takes a preprocessed image tensor and returns the classification result.
+
+         Args:
+             image_tensor (torch.Tensor): The preprocessed image tensor.
+
+         Returns:
+             dict: A dictionary containing the classification label and confidence score.
+         """
+         image_features = self.clip_model.encode_image(image_tensor)
+         image_features_np = image_features.cpu().numpy()
+
+         prediction = self.svm_model.predict(image_features_np)[0]
+
+         if hasattr(self.svm_model, "predict_proba"):
+             # Use predict_proba for a calibrated confidence score
+             confidence_scores = self.svm_model.predict_proba(image_features_np)[0]
+             confidence = float(np.max(confidence_scores))
+         else:
+             # Fall back to decision_function: the magnitude of the margin
+             # indicates confidence. A sigmoid of the absolute score maps it
+             # into the [0.5, 1) range for a roughly comparable scale.
+             decision_score = self.svm_model.decision_function(image_features_np)[0]
+             confidence = float(1 / (1 + np.exp(-np.abs(decision_score))))
+
+         label_map = {0: 'real', 1: 'fake'}
+         classification_label = label_map.get(prediction, "unknown")
+
+         return {
+             "classification": classification_label,
+             "confidence": confidence
+         }
+
+ inferencer = Inferencer()
features/ai_human_image_classifier/main.py ADDED
@@ -0,0 +1,27 @@
+ from fastapi import FastAPI
+ from routes import router as api_router
+
+ # Initialize the FastAPI app
+ app = FastAPI(
+     title="Real vs. Fake Image Classification API",
+     description="An API to classify images as real or fake using OpenAI's CLIP and an SVM model.",
+     version="1.0.0"
+ )
+
+ # Include the API router.
+ # All routes defined in routes.py will be available under the /api prefix.
+ app.include_router(api_router, prefix="/api", tags=["Classification"])
+
+ @app.get("/", tags=["Root"])
+ async def read_root():
+     """
+     A simple root endpoint to confirm the API is running.
+     """
+     return {"message": "Welcome to the Image Classification API. Go to /docs for the API documentation."}
+
+
+ # To run this application:
+ # 1. Make sure you have all dependencies from requirements.txt installed.
+ # 2. The SVM model is downloaded automatically from the Hugging Face Hub
+ #    on startup (see model_loader.py), so no local model file is required.
+ # 3. Run the following command in your terminal:
+ #    uvicorn main:app --reload
features/ai_human_image_classifier/model_loader.py ADDED
@@ -0,0 +1,80 @@
+ import clip
+ import torch
+ import joblib
+ from huggingface_hub import hf_hub_download
+
+ class ModelLoader:
+     """
+     A class to load and hold the machine learning models.
+     This ensures that models are loaded only once.
+     """
+     def __init__(self, clip_model_name: str, svm_repo_id: str, svm_filename: str):
+         """
+         Initializes the ModelLoader and loads the models.
+
+         Args:
+             clip_model_name (str): The name of the CLIP model to load (e.g., 'ViT-L/14').
+             svm_repo_id (str): The repository ID on Hugging Face (e.g., 'rhnsa/ai_human_image_detector').
+             svm_filename (str): The name of the model file in the repository (e.g., 'model.joblib').
+         """
+         self.device = "cuda" if torch.cuda.is_available() else "cpu"
+         print(f"Using device: {self.device}")
+
+         self.clip_model, self.clip_preprocess = self._load_clip_model(clip_model_name)
+         self.svm_model = self._load_svm_model(repo_id=svm_repo_id, filename=svm_filename)
+         print("Models loaded successfully.")
+
+     def _load_clip_model(self, model_name: str):
+         """
+         Loads the specified CLIP model and its preprocessor.
+
+         Args:
+             model_name (str): The name of the CLIP model.
+
+         Returns:
+             A tuple containing the loaded CLIP model and its preprocess function.
+         """
+         try:
+             model, preprocess = clip.load(model_name, device=self.device)
+             return model, preprocess
+         except Exception as e:
+             print(f"Error loading CLIP model: {e}")
+             raise
+
+     def _load_svm_model(self, repo_id: str, filename: str):
+         """
+         Downloads and loads the SVM model from a Hugging Face Hub repository.
+
+         Args:
+             repo_id (str): The repository ID on Hugging Face.
+             filename (str): The name of the model file in the repository.
+
+         Returns:
+             The loaded SVM model object.
+         """
+         print(f"Downloading SVM model from Hugging Face repo: {repo_id}")
+         try:
+             # Download the model file from the Hub. It returns the cached path.
+             model_path = hf_hub_download(repo_id=repo_id, filename=filename)
+             print(f"SVM model downloaded to: {model_path}")
+
+             # Load the model from the downloaded path
+             svm_model = joblib.load(model_path)
+             return svm_model
+         except Exception as e:
+             print(f"Error downloading or loading SVM model from Hugging Face: {e}")
+             raise
+
+ # --- Global Model Instance ---
+ # This creates a single instance of the models that can be imported by other modules.
+ CLIP_MODEL_NAME = 'ViT-L/14'
+ SVM_REPO_ID = 'rhnsa/ai_human_image_detector'
+ SVM_FILENAME = 'svm_model_real.joblib'  # The name of the model file in the Hugging Face repo
+
+ # This instance will be created when the application starts.
+ models = ModelLoader(
+     clip_model_name=CLIP_MODEL_NAME,
+     svm_repo_id=SVM_REPO_ID,
+     svm_filename=SVM_FILENAME
+ )
@@ -0,0 +1,34 @@
+ from PIL import Image
+ import torch
+ from typing import IO
+ from model_loader import models
+
+ class ImagePreprocessor:
+
+     def __init__(self):
+         self.preprocess = models.clip_preprocess
+         self.device = models.device
+
+     def process(self, image_file: IO) -> torch.Tensor:
+         """
+         Opens an image file, preprocesses it, and returns it as a tensor.
+
+         Args:
+             image_file (IO): The image file object (e.g., from a file upload).
+
+         Returns:
+             torch.Tensor: The preprocessed image as a tensor, ready for the model.
+         """
+         try:
+             # Open the image from the file-like object
+             image = Image.open(image_file).convert("RGB")
+         except Exception as e:
+             print(f"Error opening image: {e}")
+             # Surface a ValueError so the controller can return a clean error
+             raise ValueError("Invalid or corrupted image file.") from e
+
+         # Apply the CLIP preprocessing transformations and move to the correct device
+         image_tensor = self.preprocess(image).unsqueeze(0).to(self.device)
+         return image_tensor
+
+ preprocessor = ImagePreprocessor()
features/ai_human_image_classifier/routes.py ADDED
@@ -0,0 +1,44 @@
+ from fastapi import APIRouter, File, UploadFile, HTTPException, status
+ from fastapi.responses import JSONResponse
+ from fastapi.security import HTTPBearer
+ from slowapi import Limiter
+ from slowapi.util import get_remote_address
+
+ from controller import controller
+
+ # Create the API router
+ router = APIRouter()
+
+ # Rate limiting and bearer-token auth are initialized here but not yet
+ # applied to any route; they are scaffolding for future hardening.
+ limiter = Limiter(key_func=get_remote_address)
+ security = HTTPBearer()
+
+ @router.post("/classify", summary="Classify an image as Real or Fake")
+ async def classify_image_endpoint(image: UploadFile = File(...)):
+     """
+     Accepts an image file and classifies it as 'real' or 'fake'.
+
+     - **image**: The image file to be classified (e.g., JPEG, PNG).
+
+     Returns a JSON object with the classification and a confidence score.
+     """
+     # Check for a valid image content type
+     if not image.content_type.startswith("image/"):
+         raise HTTPException(
+             status_code=status.HTTP_415_UNSUPPORTED_MEDIA_TYPE,
+             detail="Unsupported file type. Please upload an image (e.g., JPEG, PNG)."
+         )
+
+     # The controller expects a file-like object, which `image.file` provides
+     result = controller.classify_image(image.file)
+
+     if "error" in result:
+         # If the controller returned an error, forward it as an HTTP exception
+         raise HTTPException(
+             status_code=status.HTTP_400_BAD_REQUEST,
+             detail=result["error"]
+         )
+
+     return JSONResponse(content=result, status_code=status.HTTP_200_OK)