Prathmesh2008 committed on
Commit
9341581
1 Parent(s): a154b96

Update code.py


This code is a Python script that demonstrates how to create a deep learning model for binary classification using the VGG16 architecture pre-trained on the ImageNet dataset. Here's a breakdown of what each part of the code does:

Importing Libraries: The necessary libraries are imported: NumPy, and from Keras/TensorFlow the VGG16 application (with its preprocess_input function), the Flatten and Dense layers, the Model class, and ImageDataGenerator.
Load Pre-trained VGG16 Model: The VGG16 model is loaded with pre-trained weights from the ImageNet dataset. The include_top=False argument indicates that the fully connected layers (top layers) of the VGG16 model will not be included, allowing for customization with additional layers.
Freeze Base Model Layers: All layers of the pre-trained VGG16 model are set to non-trainable (frozen) to prevent their weights from being updated during training.
Add Custom Layers for Classification: Additional layers are added on top of the pre-trained VGG16 base model to adapt it for binary classification (in this case, face authentication). These layers are a flattening layer (Flatten) that converts the output of the base model into a one-dimensional vector, followed by a fully connected (Dense) layer with ReLU activation. The final single-unit Dense layer uses a sigmoid activation function to produce the binary classification output.
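As a side note, the single sigmoid unit maps any real-valued activation into (0, 1), which is then typically thresholded at 0.5 to make a binary decision. A minimal NumPy sketch (the logit values are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical raw outputs (logits) of the final Dense(1) layer
logits = np.array([-2.0, 0.0, 3.0])
probs = sigmoid(logits)
labels = (probs >= 0.5).astype(int)  # threshold at 0.5 for the binary decision
```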
Create Final Model: The custom layers are combined with the pre-trained VGG16 base model to create the final model.
Compile the Model: The model is compiled with the Adam optimizer and binary cross-entropy loss function. Accuracy is used as the metric to monitor during training.
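For reference, the binary cross-entropy loss the model is compiled with averages −[y·log(p) + (1−y)·log(1−p)] over the batch. A plain-NumPy sketch with hypothetical labels and predictions (not the Keras implementation itself, which additionally works on tensors):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip predictions away from 0 and 1 to avoid log(0)
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

# Hypothetical ground-truth labels and sigmoid outputs
y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.3])
loss = binary_crossentropy(y_true, y_pred)  # low loss: predictions mostly agree
```

Confident, correct predictions drive the loss toward 0, while confident wrong ones are penalized heavily, which is what makes this loss a good fit for a sigmoid output.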
Define the Data Generator: A Keras ImageDataGenerator is created to load and preprocess both the training and validation data. The preprocess_input function is applied so that input images are preprocessed the way the VGG16 model expects.
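In its default mode, VGG16's preprocess_input flips channels from RGB to BGR and subtracts the per-channel ImageNet means (it does not rescale to [0, 1]). The NumPy sketch below replicates that behavior for illustration; it is not a drop-in replacement for the Keras function:

```python
import numpy as np

# ImageNet per-channel means used by VGG16's 'caffe'-style preprocessing (BGR order)
IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68])

def vgg16_preprocess(rgb_batch):
    # Flip the last axis from RGB to BGR, then subtract the channel means
    bgr = rgb_batch[..., ::-1].astype(np.float64)
    return bgr - IMAGENET_MEANS_BGR

# A hypothetical 1x2x2 'image' batch with constant pixel value 128
img = np.full((1, 2, 2, 3), 128.0)
out = vgg16_preprocess(img)
```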
Load Training and Validation Data: Training and validation data are loaded from directories using the flow_from_directory method of the data generators. Image resizing and batch size are specified, along with binary class mode.
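flow_from_directory infers class labels from subdirectory names (sorted alphabetically, so with class_mode='binary' the first name becomes label 0). A stdlib-only sketch of the layout it expects; the class names 'fake' and 'real' are hypothetical, since the actual subfolder names are not shown in the commit:

```python
import os
import tempfile

# flow_from_directory expects one subfolder per class under each split directory
root = tempfile.mkdtemp()
for split in ('train', 'valid'):
    for cls in ('fake', 'real'):  # hypothetical class names
        os.makedirs(os.path.join(root, split, cls))

# Sorted subfolder names determine the label mapping: 'fake' -> 0, 'real' -> 1
classes = sorted(os.listdir(os.path.join(root, 'train')))
```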
Train the Model: The model is trained using the fit method, specifying the training data generator, number of epochs, and validation data generator.
Evaluate the Model: After training, the model is evaluated on the validation data using the evaluate method, and the validation accuracy is printed.
This script provides a complete workflow for training a face authentication model using the VGG16 architecture and pre-trained weights.

Files changed (1)
  1. code.py +59 -0
code.py CHANGED
@@ -0,0 +1,51 @@
+ # Import the required libraries
+ import numpy as np
+ from keras.applications.vgg16 import VGG16, preprocess_input
+ from keras.layers import Flatten, Dense
+ from keras.models import Model
+ from tensorflow.keras.preprocessing.image import ImageDataGenerator
+
+ # Load the pre-trained VGG16 model without its fully connected top layers
+ base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
+
+ # Freeze the base model layers so their weights are not updated during training
+ for layer in base_model.layers:
+     layer.trainable = False
+
+ # Add custom layers for face classification
+ x = base_model.output
+ x = Flatten()(x)
+ x = Dense(1024, activation='relu')(x)
+ predictions = Dense(1, activation='sigmoid')(x)
+
+ # Create the final model
+ model = Model(inputs=base_model.input, outputs=predictions)
+
+ # Compile the model
+ model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
+
+ # Define a data generator that applies VGG16-specific preprocessing
+ data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)
+
+ # Load the training images (one subfolder per class)
+ train_data = data_generator.flow_from_directory(
+     'img_for_deepfake_detection/train',
+     target_size=(224, 224),
+     batch_size=32,
+     class_mode='binary',
+ )
+
+ # Load the validation images
+ valid_data = data_generator.flow_from_directory(
+     'img_for_deepfake_detection/valid',
+     target_size=(224, 224),
+     batch_size=32,
+     class_mode='binary',
+ )
+
+ # Train the model
+ model.fit(train_data, epochs=10, validation_data=valid_data)
+
+ # Evaluate the model on the validation data
+ loss, accuracy = model.evaluate(valid_data)
+ print(f'Validation Accuracy: {accuracy*100:.2f}%')