---
license: apache-2.0
library_name: pytorch
---

# a-and-not-b

A neuron that performs the A AND (NOT B) logical computation. It generates the following truth table:

| A | B | C |
| - | - | - |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

It is inspired by McCulloch & Pitts' 1943 paper 'A Logical Calculus of the Ideas Immanent in Nervous Activity'. It doesn't contain any parameters.

It takes as input two column vectors of zeros and ones. It outputs a single column vector of zeros and ones.

Its mechanism is outlined in Figure 10-3 of Aurélien Géron's book 'Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow'.

![](https://raw.githubusercontent.com/sambitmukherjee/handson-ml3-pytorch/main/chapter10/Figure_10-3.png)

Like all the other neurons in Figure 10-3, it is activated when at least two of its input connections are active. Here, input A feeds in through two excitatory connections while input B acts through an inhibitory (negative) connection, so the threshold of 2 is reached only when A is 1 and B is 0.

Code: https://github.com/sambitmukherjee/handson-ml3-pytorch/blob/main/chapter10/logical_computations_with_neurons.ipynb

## Usage

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Let's create two column vectors containing `0`s and `1`s.
batch = {'a': torch.tensor([[0], [0], [1], [1]]), 'b': torch.tensor([[0], [1], [0], [1]])}

class A_AND_NOT_B(nn.Module, PyTorchModelHubMixin):
    def __init__(self):
        super().__init__()
        self.operation = "C = A AND (NOT B)"

    def forward(self, x):
        a = x['a']
        b = x['b']
        b = -1 * b  # The connection from B is inhibitory, so its contribution is negated.
        inputs = torch.cat([a, a, b], dim=1)  # A feeds in through two excitatory connections.
        column_sum = torch.sum(inputs, dim=1, keepdim=True)
        output = (column_sum >= 2).long()  # The neuron fires when the sum reaches the threshold of 2.
        return output

# Instantiate:
a_and_not_b = A_AND_NOT_B.from_pretrained("sadhaklal/a-and-not-b")

# Forward pass:
output = a_and_not_b(batch)
print(output)
```
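
Running the forward pass on the batch above should print the following, matching column C of the truth table:

```
tensor([[0],
        [0],
        [1],
        [0]])
```

As a quick sanity check (not part of the original notebook), the output can also be compared against a direct boolean computation of A AND (NOT B):

```python
# Sanity check (illustrative, not from the original notebook):
# recompute C = A AND (NOT B) with boolean tensor ops and compare.
expected = (batch['a'].bool() & ~batch['b'].bool()).long()
assert torch.equal(output, expected)
```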