---
license: mit
language:
- en
tags:
- image-classification
- computer-vision
- deep-learning
- face-detection
- resnet
datasets:
- custom
---
# ResNet-based Face Classification Model
This model is trained to distinguish between real human faces and AI-generated faces using a ResNet-based architecture.
## Model Description
### Model Architecture
- Deep CNN with residual connections based on ResNet architecture
- Input shape: (224, 224, 3)
- Multiple residual blocks with increasing filter sizes [64, 128, 256, 512]
- Global average pooling
- Dense layers with dropout for classification
- Binary output with sigmoid activation
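The training code is not included in this card, but the architecture described above can be sketched in Keras roughly as follows. Layer sizes follow the list above; strides, pooling placement, and the shortcut projection are assumptions, not the released implementation:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def residual_block(x, filters):
    # Two 3x3 convolutions with a skip connection; a 1x1 conv
    # projects the shortcut when the channel count changes
    shortcut = x
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding='same')(shortcut)
    y = layers.Add()([shortcut, y])
    return layers.ReLU()(y)

def build_model():
    inputs = layers.Input(shape=(224, 224, 3))
    # Initial conv: 64 filters, 7x7 (per the hyperparameter list below)
    x = layers.Conv2D(64, 7, strides=2, padding='same', activation='relu')(inputs)
    x = layers.MaxPooling2D(3, strides=2, padding='same')(x)
    # Residual blocks with increasing filter sizes
    for filters in [64, 128, 256, 512]:
        x = residual_block(x, filters)
        x = layers.MaxPooling2D(2)(x)  # downsample between stages (assumed)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation='relu')(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation='sigmoid')(x)  # binary output
    return models.Model(inputs, outputs)
```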
### Task
Binary classification to determine if a face image is real (human) or AI-generated.
### Framework and Training
- Framework: TensorFlow
- Training Device: GPU
- Training Dataset: Custom dataset of real and AI-generated faces
- Validation Metrics:
  - Accuracy: 52.45%
  - Loss: 0.7246
## Intended Use
### Primary Intended Uses
- Research in deepfake detection
- Educational purposes in deep learning
- Prototyping face-authentication pipelines in research settings
### Out-of-Scope Uses
- Production-level face verification
- Legal or forensic applications
- Stand-alone security systems
## Training Procedure
### Training Details
```python
from tensorflow.keras.optimizers import Adam

optimizer = Adam(learning_rate=0.0001)
loss = 'binary_crossentropy'
metrics = ['accuracy']
```
### Training Hyperparameters
- Learning rate: 0.0001
- Batch size: 32
- Dropout rate: 0.5
- Architecture:
  - Initial conv: 64 filters, 7x7
  - Residual blocks: [64, 128, 256, 512] filters
  - Dense layer: 256 units
## Evaluation Results
### Performance Metrics
- Validation Accuracy: 52.45%
- Validation Loss: 0.7246
### Limitations
- Performance is only slightly better than random chance
- May struggle with high-quality AI-generated images
- Limited testing on diverse face datasets
## Usage
```python
from tensorflow.keras.models import load_model
import cv2
import numpy as np

# Load the model
model = load_model('face_classification_model1')

# Preprocess a single image to the model's (224, 224, 3) input
def preprocess_image(image_path):
    img = cv2.imread(image_path)
    # OpenCV loads images as BGR; convert if the model was trained on RGB inputs
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224))
    img = img.astype(np.float32) / 255.0
    return np.expand_dims(img, axis=0)

# Make prediction
image = preprocess_image('face_image.jpg')
prediction = model.predict(image)
is_real = prediction[0][0] > 0.5
```
## Ethical Considerations
This model is designed for research and educational purposes only. Users should:
- Obtain proper consent when processing personal face images
- Be aware of potential biases in face detection systems
- Consider privacy implications when using face analysis tools
- Not use this model for surveillance or harmful applications
## Technical Limitations
1. Current performance limitations:
   - Accuracy only slightly above random chance
   - May require ensemble methods for better results
   - Limited testing on diverse datasets
2. Recommended improvements:
   - Extended training with larger datasets
   - Implementation of data augmentation
   - Hyperparameter optimization
   - Transfer learning from pre-trained models
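For the data-augmentation suggestion, one possible Keras preprocessing pipeline looks like this. The specific transforms and their ranges are illustrative choices, not part of the released model:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative augmentation stack; random transforms apply only when training=True
augment = tf.keras.Sequential([
    layers.RandomFlip('horizontal'),
    layers.RandomRotation(0.05),   # small rotations; face photos are mostly upright
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.1),
])
```

Applied to the training pipeline (e.g. `train_ds.map(lambda x, y: (augment(x, training=True), y))`), this exposes the classifier to more varied views of each face.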
## Citation
```bibtex
@software{face_classification_model1,
  author = {Arsath, S.M. and Faahith, K.R.M. and Arafath, M.S.M.},
  title = {Face Classification Model using ResNet Architecture},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/arsath-sm/face_classification_model1}
}
```
## Contributors
- Arsath S.M
- Faahith K.R.M
- Arafath M.S.M

University of Jaffna
## License
This model is licensed under the MIT License.