GoogLeNet

Overview

The Inception architecture, a convolutional neural network (CNN) designed for computer vision tasks such as classification and detection, stands out for its efficiency: it contains fewer than 7 million parameters, making it 9 times smaller than AlexNet and 22 times smaller than VGG16. The architecture gained recognition in the ImageNet 2014 challenge, where Google's entry, named GoogLeNet (a tribute to LeNet), set new performance benchmarks while using far fewer parameters than previous leading methods.

Architectural Innovations

Before the advent of the Inception architecture, models like AlexNet and VGG had demonstrated the benefits of deeper network structures. However, deeper networks are more expensive to compute and more prone to issues such as overfitting and the vanishing gradient problem. The Inception architecture offers a solution, enabling complex CNNs to be trained with far fewer parameters and a smaller computational budget.

In a conventional CNN design, layers are typically either pooling or convolution layers, each with a single fixed convolution filter size. Although stacking convolution filters of different sizes is beneficial for many tasks, it rapidly increases the total number of parameters. The Inception architecture takes a different approach: it runs convolution filters of several sizes (1x1, 3x3, 5x5) in parallel, so that different lower-dimensional embeddings (and hence more information) can be extracted from the same higher-dimensional features. These parallel branches, together with a max-pooling branch, are concatenated into a unified component known as the Inception module. GoogLeNet is composed of a series of 9 such Inception modules, a configuration that lets the network stay flexible and learn complex tasks without a substantial increase in depth.
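The 1x1 "reduction" convolutions that precede the 3x3 and 5x5 branches in the implementation below are what keep these parallel filters affordable. A rough sketch of the saving, using illustrative channel sizes (192, 16, and 32 are chosen for this example, not taken from the paper):

in_ch, red, out = 192, 16, 32
direct_5x5 = in_ch * out * 5 * 5                  # 153,600 weights
with_reduction = in_ch * red + red * out * 5 * 5  # 3,072 + 12,800 = 15,872
print(direct_5x5 / with_reduction)                # ~9.7x fewer parameters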

Code

import torch
import torch.nn as nn
import torch.nn.functional as F


class InceptionModule(nn.Module):
    def __init__(self, in_channels, n1x1, n3x3red, n3x3, n5x5red, n5x5, pool_proj):
        super(InceptionModule, self).__init__()
        # Branch 1: a single 1x1 convolution
        self.b1 = nn.Sequential(
            nn.Conv2d(in_channels, n1x1, kernel_size=1),
            nn.ReLU(True),
        )

        # Branch 2: 1x1 reduction followed by a 3x3 convolution
        self.b2 = nn.Sequential(
            nn.Conv2d(in_channels, n3x3red, kernel_size=1),
            nn.ReLU(True),
            nn.Conv2d(n3x3red, n3x3, kernel_size=3, padding=1),
            nn.ReLU(True),
        )

        # Branch 3: 1x1 reduction followed by a 5x5 convolution
        self.b3 = nn.Sequential(
            nn.Conv2d(in_channels, n5x5red, kernel_size=1),
            nn.ReLU(True),
            nn.Conv2d(n5x5red, n5x5, kernel_size=5, padding=2),
            nn.ReLU(True),
        )

        # Branch 4: 3x3 max pooling followed by a 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_channels, pool_proj, kernel_size=1),
            nn.ReLU(True),
        )

    def forward(self, x):
        y1 = self.b1(x)
        y2 = self.b2(x)
        y3 = self.b3(x)
        y4 = self.b4(x)
        # Concatenate the branch outputs along the channel dimension;
        # output channels = n1x1 + n3x3 + n5x5 + pool_proj
        return torch.cat([y1, y2, y3, y4], 1)


class GoogLeNet(nn.Module):
    def __init__(self):
        super(GoogLeNet, self).__init__()
        # Stem: a single 3x3 convolution (the original GoogLeNet uses a deeper stem)
        self.pre_layers = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(True),
        )

        # Nine Inception modules, with two stride-2 max pools to halve the spatial resolution
        self.inception_blocks = nn.Sequential(
            InceptionModule(64, 16, 32, 32, 16, 8, 8),
            InceptionModule(64, 24, 32, 48, 16, 12, 12),
            nn.MaxPool2d(3, stride=2, padding=1),
            InceptionModule(96, 24, 32, 48, 16, 12, 12),
            InceptionModule(96, 16, 32, 48, 16, 16, 16),
            InceptionModule(96, 16, 32, 48, 16, 16, 16),
            InceptionModule(96, 16, 32, 48, 16, 16, 16),
            InceptionModule(96, 32, 32, 48, 16, 24, 24),
            nn.MaxPool2d(3, stride=2, padding=1),
            InceptionModule(128, 32, 48, 64, 16, 16, 16),
            InceptionModule(128, 32, 48, 64, 16, 16, 16),
        )

        # Classifier head: global average pooling followed by a 100-way linear layer
        self.output_net = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(), nn.Linear(128, 100)
        )

    def forward(self, x):
        x = self.pre_layers(x)
        x = self.inception_blocks(x)
        x = self.output_net(x)
        # Note: nn.CrossEntropyLoss expects raw logits, so for training one would
        # typically return x directly and apply softmax only at inference time.
        return F.softmax(x, dim=1)
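
A minimal sanity check of the shapes, assuming 32x32 RGB inputs (an assumption consistent with the shallow stem and the 100-way classifier; the batch size of 2 is arbitrary):

if __name__ == "__main__":
    # One Inception module: output channels are 16 + 32 + 8 + 8 = 64
    block = InceptionModule(64, 16, 32, 32, 16, 8, 8)
    print(block(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])

    # Full network: the two stride-2 max pools reduce 32x32 to 8x8
    model = GoogLeNet()
    probs = model(torch.randn(2, 3, 32, 32))
    print(probs.shape)       # torch.Size([2, 100])
    print(probs.sum(dim=1))  # each row sums to ~1 (softmax output)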