---
license: apache-2.0
library_name: pytorch
---

# and

A neuron that performs the AND logical computation. It generates the following truth table:

| A | B | C |
| - | - | - |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

It is inspired by McCulloch & Pitts' 1943 paper 'A Logical Calculus of the Ideas Immanent in Nervous Activity'.

It doesn't contain any parameters.

It takes as input two column vectors of zeros and ones. It outputs a single column vector of zeros and ones.

Its mechanism is outlined in Figure 10-3 of Aurelien Geron's book 'Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow'.

![](https://raw.githubusercontent.com/sambitmukherjee/handson-ml3-pytorch/main/chapter10/Figure_10-3.png)

Like all the other neurons in Figure 10-3, it is activated when at least two of its input connections are active.

Code: https://github.com/sambitmukherjee/handson-ml3-pytorch/blob/main/chapter10/logical_computations_with_neurons.ipynb

## Usage

```
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Create two column vectors containing `0`s and `1`s.
batch = {'a': torch.tensor([[0], [0], [1], [1]]), 'b': torch.tensor([[0], [1], [0], [1]])}

class AND(nn.Module, PyTorchModelHubMixin):
    def __init__(self):
        super().__init__()
        self.operation = "C = A AND B"

    def forward(self, x):
        a = x['a']
        b = x['b']
        # Concatenate the two column vectors side by side, then sum each row.
        inputs = torch.cat([a, b], dim=1)
        column_sum = torch.sum(inputs, dim=1, keepdim=True)
        # The neuron fires (outputs 1) only when both input connections are active.
        output = (column_sum >= 2).long()
        return output

# Instantiate:
logical_and = AND.from_pretrained("sadhaklal/and")

# Forward pass:
output = logical_and(batch)
print(output)
```
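
As an optional sanity check (not part of the original notebook), the sketch below assumes the usage code above has already been run. Since the module has no parameters, instantiating `AND()` directly behaves the same as the version loaded from the Hub; the variable names `local_and` and `expected` are illustrative.

```
# Illustrative sanity check: the output should reproduce column C of the truth table.
local_and = AND()
expected = torch.tensor([[0], [0], [0], [1]])  # Column C of the truth table.
assert torch.equal(local_and(batch), expected)
print("Truth table reproduced.")
```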