---
license: apache-2.0
library_name: transformers
pipeline_tag: image-classification
model-index:
- name: AI Image Detect
results:
- task:
type: image-classification
name: Image Classification
metrics:
- type: accuracy
value: 0.98
---
This is a simple AI-generated-image detection model: a Vision Transformer (ViT) with a binary classification head, fine-tuned on the CIFAKE dataset.

Example usage:
```python
import torch
from PIL import Image
from torchvision import transforms
from transformers import ViTForImageClassification

# Load the base ViT and swap in the fine-tuned binary classification head
model_path = 'vit_model.pth'
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)
model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu')))
model.eval()

# Define the image preprocessing pipeline (resize to ViT's 224x224 input)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def predict(image_path, model, preprocess):
    # Load and preprocess the image, adding a batch dimension
    image = Image.open(image_path).convert('RGB')
    inputs = preprocess(image).unsqueeze(0)
    # Perform inference without tracking gradients
    with torch.no_grad():
        outputs = model(inputs).logits
    predicted_label = torch.argmax(outputs).item()
    # Map the predicted label to the corresponding class
    label_map = {0: 'FAKE', 1: 'REAL'}
    return label_map[predicted_label]

# Example usage
image_paths = [
    'path/to/image.jpg',
    'path/to/image.jpg',
    'path/to/image.jpg',
]
for image_path in image_paths:
    predicted_class = predict(image_path, model, preprocess)
    print(f'{image_path}: predicted class {predicted_class}')
```
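The `predict` helper returns only the argmax label; if you also want a confidence score, a softmax over the same logits gives per-class probabilities. A minimal sketch of that post-processing step, using a stand-in logits tensor rather than real model output:

```python
import torch

# Stand-in for model(inputs).logits: batch of 1, two classes (FAKE, REAL)
logits = torch.tensor([[2.0, -1.0]])

# Softmax converts raw logits into probabilities that sum to 1
probs = torch.softmax(logits, dim=-1)

label_map = {0: 'FAKE', 1: 'REAL'}
idx = torch.argmax(probs, dim=-1).item()
print(f'{label_map[idx]} (confidence {probs[0, idx].item():.3f})')
```

In the real pipeline you would replace the stand-in tensor with the `outputs` computed inside `predict`.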